
College Scorecard Application

Building an application to serve College Scorecard open data

Valerie Parham-Thompson

I’m still working on the College Scorecard dataset. Previously, I explored the dataset in a real-world application, talked about how to clean the data, and worked with the data API.

Now I’ve decided I want to put this in a web application so others can use the dataset in the same flexible way that I have been using it. (Reminder: several popular college-search websites exist, but they limit the ways you can filter the data, and they tend to gather personal data, presumably for ad targeting.)

Finding the Right Yugabyte API Endpoint

Tour through the YugabyteDB YBA API endpoints with a real-world example

Valerie Parham-Thompson

As YugabyteDB continues to evolve, its extensive API ecosystem offers powerful capabilities for database management and automation. However, with hundreds of endpoints spread across overlapping categories, locating exactly the one you need can be challenging. In this guide, I’ll walk you through several proven strategies for efficiently finding the right API endpoints, along with real-world examples and pro tips I’ve learned from working with YugabyteDB’s API ecosystem.

Method 1: Navigating Categories in the API Documentation

The API documentation (api-docs.yugabyte.com) provides a well-organized categorical view of available endpoints. Understanding how to navigate these categories effectively will significantly speed up your API discovery process.
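Once you’ve located an endpoint in the category view, calling it is straightforward. Below is a minimal Python sketch with a hypothetical host and token; the paths, auth header, and response shapes are assumptions to verify against api-docs.yugabyte.com before relying on them.

```python
# Minimal sketch of calling YBA API endpoints found in the docs.
# The host and token are placeholders; the paths and response shapes
# are assumptions to verify against api-docs.yugabyte.com.
import requests

YBA_HOST = "https://yba.example.com"  # hypothetical YBA host
API_TOKEN = "your-api-token"          # generated in the YBA UI

headers = {"X-AUTH-YW-API-TOKEN": API_TOKEN}

# List customers to discover the customer UUID most endpoints require.
customers = requests.get(f"{YBA_HOST}/api/v1/customers", headers=headers).json()
customer_uuid = customers[0]["uuid"]

# List the universes belonging to that customer.
universes = requests.get(
    f"{YBA_HOST}/api/v1/customers/{customer_uuid}/universes",
    headers=headers,
).json()
for universe in universes:
    print(universe["name"], universe["universeUUID"])
```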

Handling Reserved Keywords in DSBulk for Seamless Data Migration

How to handle reserved keywords using DataStax DSBulk in a YugabyteDB migration

Valerie Parham-Thompson

Migrating to YugabyteDB offers significant advantages in terms of high availability, global distribution, and horizontal scalability—features essential for managing modern database workloads. However, data migration can be a complex process, particularly when transforming your schema definition. Differences in datatype support, query syntax, and core features across systems can complicate the transformation.

One of the challenges is dealing with reserved keywords in the source schema that cannot be directly used in the target system. This can require changes not only in the database schema during transformation but also in application code and related tooling.
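As a small illustration of the pattern (not the actual migration tooling), here’s one way to guard generated statements against keyword collisions by quoting identifiers; the reserved-word set below is a tiny made-up subset, not the real list for either system.

```python
# Sketch: wrap any identifier that collides with a reserved word in
# double quotes before emitting DDL or query strings. The set below
# is a small illustrative subset, not a complete reserved-word list.
RESERVED = {"order", "group", "user", "select", "table"}

def quote_identifier(name: str) -> str:
    """Double-quote an identifier if it is a reserved keyword."""
    return f'"{name}"' if name.lower() in RESERVED else name

columns = ["id", "order", "amount"]
quoted = [quote_identifier(c) for c in columns]

# e.g. SELECT id, "order", amount FROM sales
print(f"SELECT {', '.join(quoted)} FROM sales")
```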

College Scorecard API

Mapping College Scorecard data using the API

Valerie Parham-Thompson

After I finished the YugabyteDB universe network mapping example, I started thinking about other things to map. Anything with latitude and longitude will work. College locations from my previous work on the College Scorecard data set were an obvious choice.

Previously, I had exported the data and transformed it to allow for sorting and analysis. That’s still a valid method if you want to play with the full data set, since the API allows a maximum page size of only 100 records per request. With the right filters, however, that might be enough, and the API is a quicker path to getting the data.
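Here’s a rough sketch of paging through filtered results in Python. The filter and field names are examples only; check the College Scorecard API docs for the full list, and supply your own api.data.gov key.

```python
# Sketch: pull filtered results from the College Scorecard API,
# paging in chunks of 100 (the API's maximum page size). Field
# names and filters are examples; see the API docs for the full set.
import requests

BASE = "https://api.data.gov/ed/collegescorecard/v1/schools"
API_KEY = "your-api-key"  # from api.data.gov

params = {
    "api_key": API_KEY,
    "school.state": "NC",  # example filter
    "fields": "school.name,location.lat,location.lon",
    "per_page": 100,       # maximum allowed
    "page": 0,
}

schools = []
while True:
    data = requests.get(BASE, params=params).json()
    results = data.get("results", [])
    schools.extend(results)
    if len(results) < params["per_page"]:
        break  # short page means we've reached the end
    params["page"] += 1

print(f"Fetched {len(schools)} schools")
```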

Plotly Network Map

Using the Plotly library to work with geographic data

Valerie Parham-Thompson

I’ve added a new feature to the day 2 ops tool.

With the diagram command, you can create a map of your Yugabyte cluster overlaid on a map of the world. Here’s an example:

Yugabyte Network Map

The Plotly library is very powerful, with a lot of options. I used the network map option, which lets you define nodes and the edges connecting them. In this case, the nodes are an abstraction of the database instances in a YugabyteDB cluster, and the edges represent the network connections between those instances.
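Here’s a minimal sketch of that node-and-edge pattern using Plotly’s Scattergeo trace, with made-up coordinates for three regions standing in for real cluster metadata.

```python
# Sketch: a node-and-edge map with Plotly's Scattergeo trace.
# The coordinates are placeholders for three cluster regions; the
# real tool derives them from YugabyteDB cluster metadata.
import plotly.graph_objects as go

# Node positions: (name, lat, lon)
nodes = [("us-east", 37.4, -78.7), ("us-west", 45.5, -122.7), ("eu-west", 53.3, -6.3)]
edges = [(0, 1), (0, 2), (1, 2)]  # fully connected, for illustration

fig = go.Figure()

# Draw each edge as a line between two node positions.
for a, b in edges:
    fig.add_trace(go.Scattergeo(
        lat=[nodes[a][1], nodes[b][1]],
        lon=[nodes[a][2], nodes[b][2]],
        mode="lines",
        line=dict(width=1),
        showlegend=False,
    ))

# Draw the nodes themselves as labeled markers.
fig.add_trace(go.Scattergeo(
    lat=[n[1] for n in nodes],
    lon=[n[2] for n in nodes],
    text=[n[0] for n in nodes],
    mode="markers+text",
    textposition="top center",
    showlegend=False,
))

fig.update_layout(title="YugabyteDB cluster network map")
fig.show()
```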

Code as Instructional Technology

Writing an interactive command-line tool as a learning tool for YugabyteDB REST APIs

Valerie Parham-Thompson

I’ve had the chance to share my database expertise in a variety of venues: speaking at meetups and conferences, leading hands-on workshops, mentoring new technologists, and of course writing.

I had been brewing a new idea for sharing content when a great opportunity landed in my lap.

The idea was: share what I know about managing a specific database product in code. Instead of creating a runbook for how to set up replication, I would write code that sets up replication. The key part is that it would have to be well organized, commented, and documented to be useful to learners. Making it interactive would help users understand the options and parameters as they choose commands and add flags. Even the error messages would give them insight into how it all works.
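As a rough illustration of the concept (not the actual tool), here’s what a teaching-oriented interactive shell can look like with Python’s cmd module, where even the error message explains the expected command shape.

```python
# Sketch: an interactive learning shell where commands are documented
# inline and errors teach the expected usage. The replicate command
# is illustrative, not the real tool's implementation.
import cmd

class OpsShell(cmd.Cmd):
    intro = "Interactive YugabyteDB learning shell. Type help or ?."
    prompt = "(yb-ops) "

    def do_replicate(self, arg):
        """replicate <source> <target> -- set up replication between universes."""
        parts = arg.split()
        if len(parts) != 2:
            # The error itself teaches the expected shape of the command.
            print("replicate needs exactly two arguments: a source and a target universe.")
            return
        source, target = parts
        print(f"Would configure replication from {source} to {target} via the REST API.")

    def do_quit(self, arg):
        """quit -- exit the shell."""
        return True

if __name__ == "__main__":
    OpsShell().cmdloop()
```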

Database Transformation from SQL Server to YugabyteDB

Migrating data from SQL Server to YugabyteDB

Valerie Parham-Thompson

A database transformation and migration project takes solid planning and testing. I’ve found that three common changes required when transforming a SQL Server database to YugabyteDB YSQL are related to syntax, performance, and stored procedures. These will get you started on your transformation project.

Syntax

Transforming a schema from SQL Server to YugabyteDB requires some minor syntax changes, as is true for any cross-database transformation. The YugabyteDB YSQL API uses PostgreSQL syntax.
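A few of the usual translations, collected as a quick reference sketch; these are standard equivalents, but any real transformation should be verified against your own schema.

```python
# A handful of common SQL Server -> PostgreSQL/YSQL syntax translations.
# These are standard equivalents; verify each against your own schema.
SYNTAX_MAP = {
    "SELECT TOP 10 * FROM t": "SELECT * FROM t LIMIT 10",
    "GETDATE()":              "now()",
    "[bracketed identifier]": '"quoted identifier"',
    "ISNULL(col, 0)":         "COALESCE(col, 0)",
    "NVARCHAR(100)":          "VARCHAR(100)",
}

for mssql, ysql in SYNTAX_MAP.items():
    print(f"{mssql:26} -> {ysql}")
```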

Open Source Database

What does it mean to be a database engineer of multiple open-source databases?

Valerie Parham-Thompson

I’m an open-source database consultant. But which open-source database? Well, several of them.

I made the decision several years ago to take every opportunity to work with multiple databases. Why?

  1. Learning a new language teaches you more about your own. For example, taking time to understand SSTables in Cassandra gave me more insight into how storage works in MySQL. Having these experiences across multiple databases forced me to question what I knew about internals, thereby deepening my understanding overall.

Cleaning College Scorecard Data

Tips on cleaning the College Scorecard data

Valerie Parham-Thompson

Cleaning the College Scorecard data before using it locally let me apply correct datatypes and fit the data into a Postgres table for querying the columns relevant to a college search. In case you haven’t had a chance to see other walkthroughs of my automation process for various demo needs: the full Ansible setup downloads data from a source and then loads it into a table. Previously, I used the process for MoMA art and artists data and for generating a million-row table, storing these in different YugabyteDB topologies. I leveraged it recently to load College Scorecard data for our child’s college search.
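As a sketch of that download-then-load pattern (in Python rather than Ansible), with a placeholder URL, table name, and connection string:

```python
# Sketch: download a data file and bulk-load it into a Postgres-compatible
# table with COPY. The URL, table name, and connection string are
# placeholders, not the actual Ansible setup.
import psycopg2
import requests

CSV_URL = "https://example.com/college-scorecard.csv"  # placeholder URL
LOCAL_FILE = "scorecard.csv"

# Download the raw data file.
with open(LOCAL_FILE, "wb") as f:
    f.write(requests.get(CSV_URL).content)

# Bulk-load with COPY, which is far faster than row-by-row INSERTs.
conn = psycopg2.connect("dbname=scorecard user=postgres")
with conn, conn.cursor() as cur, open(LOCAL_FILE) as f:
    cur.copy_expert(
        "COPY scorecard FROM STDIN WITH (FORMAT csv, HEADER true)",
        f,
    )
```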

Timestamps in PostgreSQL Migration

Handling timestamps across database systems like Postgres

Valerie Parham-Thompson

Math… the universal language. Timestamps, not so much.

The way we decide to denote date and time differs across both computer languages and human languages. The format also differs across implementations of SQL. For example, Oracle and Postgres allow very different formats to be entered in the timestamp data type.

Oracle allows a wide variety of punctuation in dates: hyphens, slashes, commas, periods, colons. Postgres supports a more limited list.
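One practical approach when consolidating such data is to normalize everything to ISO 8601 before loading. Here’s a sketch using python-dateutil’s permissive parser; the sample strings are invented, and a real migration should pin explicit formats where possible.

```python
# Sketch: normalize mixed source timestamp strings to ISO 8601 before
# loading into Postgres. The samples imitate the punctuation variety
# Oracle accepts; prefer pinning explicit formats over guessing.
from dateutil import parser

samples = [
    "2024-01-15 10:30:00",
    "15/01/2024 10:30",
    "Jan 15, 2024 10:30 AM",
]

for s in samples:
    # dayfirst=True disambiguates formats like 15/01/2024.
    dt = parser.parse(s, dayfirst=True)
    print(dt.isoformat())  # e.g. 2024-01-15T10:30:00
```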