
5 posts tagged with "case study"


Case Study: How Yad2 Simplified Schema Management with Atlas

· 5 min read
Noa Rogoszinski
DevRel Engineer

Company Background

Yad2 is the most popular online marketplace in Israel for buying and selling second-hand items. Since launching in 2005, Yad2 has offered an organized platform for the sale of various goods, including vehicles, housing, rentals, furniture, electronics, and more. With millions of users and listings, Yad2 handles a significant amount of data and requires a robust database management system.

The Bottleneck: All Paths Lead to the DBA

Initially, all developers at Yad2 had permission to run DDL (Data Definition Language) commands, allowing each of them to alter the structure of the production database. However, as the team grew, engineers would run into conflicts and inconsistencies caused by parallel work. Developers would often overwrite each other's changes or introduce locks on the database, causing production issues. Requiring engineers to document their changes in JIRA or Slack was helpful, but it still proved difficult to keep track of everything that was happening.

To improve the reliability of schema changes, Yad2 brought on a DBA, Yoni Sade, to take the reins, and restricted the developers' permissions to DML (Data Manipulation Language) commands. This change meant that they could no longer make schema changes, leaving those tasks to the new DBA.

The motivation for this change was sound: by limiting schema changes to fewer, more experienced DBAs, they could prevent migration conflicts and ensure the integrity of the database structure. However, putting this limitation on the development environment created a significant bottleneck — every time a developer needed to make a schema change, they had to go through the DBA, slowing down the development process considerably.

Yad2 has a very complex and interconnected schema containing thousands of objects, so each change needs to be made very carefully. Even for a seasoned DBA, this process takes a lot of time and attention.

Searching for a Solution

Yoni began looking for a solution that would allow the team to have more agency over schema changes without risking the infrastructure's integrity. The team tested the schema migration tools on the market, but found them lacking: their versioned workflows still required manual planning and did not guarantee consistency between environments.

Ultimately, they were looking for a tool similar to Terraform, which they were already using for infrastructure as code. "We wanted to have a source of truth in code or a file-formatted representation of our schemas. Since we use Terraform, we wanted something similar that everybody would already feel comfortable with," Yoni said.

Choosing Atlas

After evaluating several options, Yoni's team came across Atlas, a database schema as code tool that implements concepts similar to Terraform but for databases. Not only did Atlas fit the description they were searching for, they also discovered more features that simplified the integration of Atlas into their existing workflow.
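To illustrate what "database schema as code" means in practice, here is a minimal sketch of a table declared in Atlas's HCL syntax. The schema, table, and column names are hypothetical, not Yad2's actual schema:

```hcl
# Hypothetical marketplace schema, declared as code.
schema "marketplace" {}

table "listings" {
  schema = schema.marketplace
  column "id" {
    type = int
  }
  column "title" {
    type = varchar(255)
  }
  primary_key {
    columns = [column.id]
  }
}
```

Because the desired state lives in a file, it can be version-controlled and reviewed in pull requests, much like Terraform configuration.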

"We have many GitHub repositories, so integration with GitHub was also an important consideration," Yoni explained. Atlas's GitHub integration is a convenient way for teams to host their schema definition files on a shared platform, and there are Atlas Actions that can be used to automate the proposal, linting, and approval of schema changes on GitHub's platform. "I found Atlas's GitHub support very natural for us to use."

The Outcome

Choosing Atlas not only gave Yad2 a familiar way to manage their database schema as code, but also provided a direct, easy-to-implement solution to their bottleneck by integrating with their existing GitHub repositories.

To make schema changes, a developer now creates a new branch in GitHub, makes the necessary changes to the schema files, and opens a pull request (PR). This automatically triggers Atlas Actions that write a migration based on the suggested changes, lint the proposed migration, and report the results in a comment on the PR.

The developer now receives direct feedback from Atlas about any potential risks in these changes and can address them without waiting for the DBA. Moreover, the DBA gets a detailed account of the schema changes and can automatically apply them to production by simply merging the PR.

In addition to Atlas being a direct solution to their bottleneck, Yoni also noted other benefits of using Atlas:

  • Improved Communication. With schema changes proposed through pull requests, there is a clear record of what changes were made, why they were made, and who made them without relying on manual documentation.
  • Compliant Schema Changes. By using custom schema policies with the migrate/lint action, any proposed change that doesn't adhere to the team's schema policy is flagged in the CI process.
  • Tracking Across Environments. The Atlas Cloud UI provides users with an Entity Relationship Diagram (ERD) to visualize their schemas and schema changes, as well as drift detection to alert the team of any inconsistencies between the development and production environments.
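As a sketch of the second point, a custom schema policy can be expressed in the project's atlas.hcl file. For example, the following (hypothetical) configuration treats destructive changes, such as dropping a table or column, as errors during linting:

```hcl
# atlas.hcl — flag destructive changes as errors in CI linting.
lint {
  destructive {
    error = true
  }
}
```

With this in place, a proposed migration that drops a column would fail the migrate/lint step and be flagged on the PR.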

Getting Started

By managing your schema as code, applying custom schema policies, and configuring Atlas Actions in CI/CD, you can automate migration creation and review, leaving the DBA with only a final review and approval.
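A minimal project configuration for such a versioned setup might look like the following atlas.hcl sketch. The environment name and paths are assumptions for illustration, not any team's actual configuration:

```hcl
# Hypothetical environment definition for a versioned workflow.
env "prod" {
  # Read the connection string from an environment variable.
  url = getenv("DATABASE_URL")

  migration {
    # Directory where generated migration files are stored.
    dir = "file://migrations"
  }
}
```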

If your team is struggling with the bottleneck of many developers relying on fewer DBAs to deploy schema changes like Yad2 was, Atlas could be the solution for you.

→ Start with the quickstart

→ Join our Discord

→ And begin managing your schemas as code.

Case Study: How Beck's Hybrids Improved Reliability with Atlas

· 5 min read
Noa Rogoszinski
DevRel Engineer

"I rely on Atlas because I know it works and it gets the job done."

– Hicaro Adriano, Principal Software Engineer, Beck's Hybrids

Company Background

Founded in 1937, Beck's Hybrids appreciates the farmers who have helped them grow to become the largest family-owned retail seed company and third-largest seed brand in the United States. This position gives Beck's access to the best genetics and trait technologies from suppliers worldwide. Beck's strives to provide all customers with the tools, support, and resources they need to succeed.

The Obstacle: Unreliable Migrations

The Beck's Hybrids IT department began with a few engineers managing mainframes. Over the years, the development team has grown in an effort to provide customers and internal users with various applications that help power all aspects of the business. These systems include "FARMServer®", a precision farming platform that helps farmers make decisions based on optimal planting windows and growth stage modeling.

FARMServer is a complex, Microsoft SQL Server-based application that has grown to include a wide range of features and functionalities. Its codebase heavily utilizes stored procedures, views, and custom types to handle the intricate logic required for precision farming.

Beck's has traditionally developed its technologies in-house, and that includes their schema management system for these applications. Their system was a semi-automatic process that required manually written migrations, which can be quite fragile.

Their process required developers to manually track which revisions had been applied, carefully compare that to the current state in production, and then apply the remaining changes one-by-one - "hoping they wouldn’t fail." With no batch transactions or safety mechanisms in place, even a small mistake often left production in an inconsistent state.

Case Study: How Darkhorse Emergency Tamed Complex PostgreSQL Schemas with Atlas

· 7 min read
Noa Rogoszinski
DevRel Engineer

"When I came across Atlas and saw it described as Terraform for your database, it immediately resonated. That’s exactly what we needed. Just like Terraform solved our AWS problems, we needed something to bring that same level of control to our data."

– Maciej Bukczynski, Director of Technology, Darkhorse Emergency

Company Background

Darkhorse Emergency is a SaaS decision analytics platform for public safety services, primarily fire departments, that uses data and predictive analytics to optimize operations and resource allocation. Their platform allows for decisions to be simulated and assessed before being made, creating more transparency amongst public service teams and those that depend on them.

The Bottleneck: Evolving a Logic-Heavy Postgres Schema

"For us PostgreSQL isn't just storage. It's the core of our business logic."

Darkhorse Emergency's platform is built on a complex PostgreSQL database that serves as the backbone for their application. It is an elaborate system that processes many types of data, including 911 calls, census reports, and other public data sources.

By maintaining a carefully designed chain of views, functions, custom types, and triggers, the team is able to offload complex calculations and logic to the database. This ensures that their application can efficiently handle the demands of public safety services. "For us PostgreSQL isn't just storage. It's the core of our business logic," said Maciej Bukczynski, Director of Technology at Darkhorse Emergency.

However, this complexity presents a significant challenge when it comes to evolving the database schema. With so much happening within the database itself, the team very quickly ran into the limitations that come with common migration tools. "For example, we might have a view that feeds into 50 other views; if we want to make a change to that, we need to carefully recreate dependencies and ensure that everything remains consistent," Bukczynski explained.

The team initially tried to use classic migration tools like Flyway and Liquibase, but found that manually planning and applying migrations in such an intricate system was not only time-consuming but error-prone.

Deep Dive into Declarative Migrations

· 15 min read
Rotem Tamir
Building Atlas

Prepared for an Atlas Community Webinar, October 2024

Introduction

In recent years, the shift to declarative resource management has transformed modern infrastructure practices. Groundbreaking projects like Terraform, for infrastructure as code, and Kubernetes, for container orchestration, have exemplified the power of this approach. By focusing on what the end state should be rather than how to achieve it, declarative methods make systems more scalable, predictable, and easier to maintain—essential qualities for handling today's complex environments.

However, when it comes to managing database schemas, the industry has been slow to adopt declarative workflows. Atlas was created almost four years ago to address this gap.

Atlas supports two kinds of database schema migration workflows:

  • Versioned Migrations - each change to the database is described as a migration script, essentially a SQL file containing the SQL commands to apply the change. Migrations are versioned and applied in order.

    Contrary to most existing migration tools, Atlas relies on users defining the desired state of the database schema in code; from that definition, Atlas generates the necessary migration scripts to bring the database to the desired state.

  • Declarative Migrations - the database schema is described in a declarative way, and changes are applied by comparing the desired schema with the current schema and generating the necessary SQL commands to bring the database to the desired state.
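For the declarative workflow, the desired state and the target database are typically wired together in atlas.hcl. The following is a minimal, hypothetical sketch; the URLs and environment name are assumptions:

```hcl
# Hypothetical declarative environment: Atlas compares the desired
# state in schema.hcl against the target database and plans the diff.
env "dev" {
  src = "file://schema.hcl"                      # desired state
  url = "mysql://root:pass@localhost:3306/app"   # target database
  dev = "docker://mysql/8/dev"                   # dev database used for planning
}
```

Running `atlas schema apply --env dev` then computes and applies the SQL needed to bring the target database to the desired state.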

To date, most teams using Atlas in production have adopted its versioned migration workflow, which combines the simplicity and explicitness of classic migration tools with the benefit of automated migration generation.

Recent improvements to Atlas have addressed many of the challenges and concerns teams have expressed in the past around using declarative migrations in production. In this post, we'll take a deep dive into the declarative migration workflow.

How Conceal.IO Manages 1,500+ Redshift Schemas Using Atlas

· 4 min read
Rotem Tamir
Building Atlas

"Everything on Atlas is just making too much sense for us."
— Kaushik Shanadi, Chief Architect

Conceal, a cybersecurity company with a lean engineering team, creates a secure browsing experience using a browser extension. When Conceal shifted from serving individual consumers to working with managed service providers (MSPs), their clients' security requirements drove the need for a robust, multi-tenant architecture to ensure data isolation and scalability.

Kaushik Shanadi, VP and Chief Architect, led the charge in finding that solution.