  • 100% online; learn on your own time

  • 6 months at 15-20 hrs/wk; finish early by putting in more hours
Data Engineering Boot Camp with Washington University in St. Louis

The Washington University Data Engineering Boot Camp is designed to train you in job-ready data engineering skills, including the core engineering mindset, tools and best practices. You’ll work on 15 technical mini-projects and 2 capstone projects covering end-to-end development processes. By the end of the course, you’ll have the complete data engineering skill set needed to succeed as an engineer on a data team.

This data engineering course provides in-depth knowledge of SQL, Python, data pipelines, data transformation, Spark and the cloud services of AWS and Azure. Multiple data engineering courses and real-world projects will help you master core concepts and skills, such as creating production-ready ETL pipelines, pulling data from multiple data sources, building cloud data warehouses, data modeling and more.

Key Highlights of the Boot Camp

  • Washington University Technology and Leadership Center certificate of completion

  • Complete in 6 months 

  • Flexible online format, complete at your own pace

  • Regular 1:1 mentor sessions

  • Curriculum curated by industry experts

  • Work on 15 technical mini-projects and 2 end-to-end capstone projects 

  • Access to a community of students who share similar interests

Washington University in St. Louis

Data Engineering Jobs in 2023

Bernhardt Schroeder reports in Forbes that according to the U.S. Bureau of Labor Statistics, the data science industry in the U.S. will grow by 28% by 2026. 

Below we have included some of the top brands hiring data engineers along with the average salary offered:

  • Amazon - $109,000

  • Airbnb - $169,316

  • AT&T - $103,000

  • Facebook - $175,881

  • Google - $127,000

  • Microsoft - $165,000

  • Salesforce - $152,000

Source: Analytics Insight

Data Engineering Boot Camp Curriculum

Each module of this ~400-hour course covers key aspects of data engineering. The modules feature a combination of materials, including resources that teach critical theory, data engineering exercises and projects, and career-related coursework.

Big Data Engineering

Learn Big Data using the Hadoop Ecosystem and the hottest technology in big data: Apache Spark.

  • Learn how to use one of the most popular big data frameworks, covering both batch processing and real-time processing

  • Translate complex analysis problems into iterative or multi-stage Spark scripts

  • Use Spark programming to explore and transform massive datasets at scale by writing high-performing programs
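The iterative, multi-stage style of a Spark script can be sketched in plain Python. This is a stand-in illustration of Spark's map/filter/reduce pattern, not Spark itself; the log lines and field names are invented for the example:

```python
from functools import reduce

# A tiny batch of raw log lines (hypothetical sample data).
logs = [
    "2023-01-01 INFO start",
    "2023-01-01 ERROR disk full",
    "2023-01-02 ERROR timeout",
]

# Stage 1: parse each line into (date, level) pairs -- analogous to map().
parsed = [tuple(line.split(" ", 2)[:2]) for line in logs]

# Stage 2: keep only error records -- analogous to filter().
errors = [rec for rec in parsed if rec[1] == "ERROR"]

# Stage 3: count errors per date -- analogous to reduceByKey().
counts = reduce(
    lambda acc, rec: {**acc, rec[0]: acc.get(rec[0], 0) + 1},
    errors,
    {},
)

print(counts)  # {'2023-01-01': 1, '2023-01-02': 1}
```

In Spark, each stage would be a lazy transformation on a distributed dataset rather than an in-memory list, but the multi-stage shape of the analysis is the same.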

Data Engineering in the Cloud

Learn fundamentals of cloud computing and designing data intensive applications using various cloud components.

  • Understand the core concepts of cloud computing (Compute, Networking, Security, Data security in-transit and at-rest)

  • Design highly available and scalable cloud solutions for Data Engineering using Azure (Data Factory, CosmosDB, Azure SQL DW, Azure HDInsight, Databricks)

Data Virtualization and Container-based Applications
  • Learn to use Docker, a widely used platform that developers and administrators use to build, ship and run distributed applications. 

  • Learn about Kubernetes, a production-grade system for managing complex applications with different running containers.

  • Convert your applications and data processing pipelines to container-based applications

  • Develop your own Docker images using Dockerfiles and practice Docker Compose

  • Orchestrate containers to deliver scalable and reliable performance using Kubernetes

Streaming Data and APIs
  • Design pipelines to process Real-time Streams using Apache Kafka and Kafka Streams API

  • Design and test APIs for robust performance and security

  • Learn API best practices using real-world examples (e.g. graceful degradation, HTTP verbs, Request validation, Logging, exception handling, etc.)
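Request validation, one of the API best practices listed above, can be sketched in plain Python. The field names and rules below are illustrative, not taken from the course:

```python
def validate_request(payload: dict) -> list[str]:
    """Return a list of validation errors for an incoming API payload.

    An empty list means the request is valid. The required fields
    (user_id, event) and their rules are hypothetical examples.
    """
    errors = []
    if not isinstance(payload.get("user_id"), int):
        errors.append("user_id must be an integer")
    event = payload.get("event")
    if not isinstance(event, str) or not event:
        errors.append("event must be a non-empty string")
    return errors

# A handler would reject invalid requests with HTTP 400 up front,
# instead of letting bad data fail deep inside the pipeline.
print(validate_request({"user_id": 42, "event": "click"}))  # []
print(validate_request({"user_id": "42"}))
```

Validating at the API boundary like this is what makes graceful degradation possible: the caller gets an actionable error message rather than a downstream crash.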

Interacting with Data
  • Learn in-depth SQL, which forms the cornerstone of all relational database operations. 

  • Explore a large collection of business-related historical data of the kind used to make business decisions

  • Learn how to build and organize complex queries to make them more readable with the WITH clause, and how to use set operations such as UNION, UNION ALL, EXCEPT and INTERSECT to combine tables.
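As a small illustration of the WITH clause and set operations, here is a self-contained sketch using Python's built-in sqlite3 module; the tables and customer names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders_2022 (customer TEXT);
    CREATE TABLE orders_2023 (customer TEXT);
    INSERT INTO orders_2022 VALUES ('alice'), ('bob');
    INSERT INTO orders_2023 VALUES ('bob'), ('carol');
""")

# WITH names an intermediate result so the main query stays readable;
# UNION combines the tables and removes duplicates (UNION ALL keeps them).
rows = conn.execute("""
    WITH all_customers AS (
        SELECT customer FROM orders_2022
        UNION
        SELECT customer FROM orders_2023
    )
    SELECT customer FROM all_customers ORDER BY customer
""").fetchall()
print([r[0] for r in rows])  # ['alice', 'bob', 'carol']

# INTERSECT keeps only customers who ordered in both years.
repeat = conn.execute("""
    SELECT customer FROM orders_2022
    INTERSECT
    SELECT customer FROM orders_2023
""").fetchall()
print([r[0] for r in repeat])  # ['bob']
```

EXCEPT works the same way in reverse, keeping rows from the first query that do not appear in the second.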

Coding for Data Engineering
  • Get up to speed with Python by building multiple data engineering projects involving data wrangling, web scraping, data parsing and streaming data from sources like Twitter

  • Learn the performance difference between data structures such as hash tables, stacks, queues and more. 

  • Use popular Algorithms (like Greedy techniques, Divide and Conquer, Dynamic programming, Network flow) to improve application performance

  • Master essential Git skills to develop collaboratively in a team
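The performance difference between data structures can be demonstrated directly. This sketch times a membership test against a list (a linear scan) versus a set (a hash table):

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)
missing = -1  # worst case for the list: every element must be scanned

# Time 100 repetitions of the same membership test on each structure.
list_time = timeit.timeit(lambda: missing in as_list, number=100)
set_time = timeit.timeit(lambda: missing in as_set, number=100)

# The hash-based set answers in roughly constant time, so it is
# orders of magnitude faster than the O(n) list scan here.
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

The same O(1)-vs-O(n) trade-off is why hash tables underpin joins, deduplication and lookups throughout data engineering work.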

Build an Impressive Portfolio

The course will guide you through 15 technical mini-projects and 2 end-to-end capstone projects, building a strong portfolio that demonstrates your skills.

Guided Capstone

The guided capstone project consists of several stages of solution design for a given problem statement. During this capstone, you will work with a given dataset to create a data pipeline. The guided capstone is designed to help you understand how the various components you learn throughout the course come together to form a robust data pipeline.

You’ll be required to collect data, use cloud resources with Spark and Hadoop to load and transform it, optimize your code to improve performance, create a pipeline to automate these steps and create a dashboard for monitoring the pipeline's performance and health.

Capstone Two

This open-ended capstone project is split into two phases. Using a combination of the tools and techniques you’ve learned, you’ll build a reliable and scalable data processing pipeline. The objective is to give you the opportunity to tackle the kinds of problems you want, with the assistance to make sure your project stands out.

  • We will first guide you to build a prototype, then to design the architecture of your solution from scratch, gathering feedback from your mentors and peers to improve it. There will be continuous opportunities to seek guidance from an experienced data engineer and learn the skills of the trade.

  • After the working prototype of your data pipeline is created, you’ll scale and deploy it to the cloud. You’ll have to scale the prototype so it’s efficient, create a deployment architecture, run your code end-to-end with testing and build a monitoring dashboard to monitor pipeline health and resource utilization.

Boot Camp Student Support

You’ll complete this 100% online boot camp on your own time, but you’ll always have the support of a team throughout your experience. You’ll have access to:

  • A student advisor who you’ll work with throughout the program. They can answer any questions you have and help you overcome obstacles. 

  • A personal 1-on-1 industry mentor who will meet with you regularly to discuss your projects and give feedback. 

  • A career coach who can help you develop a tailored job search strategy based on your career goals. 

  • A Slack community of other students who you can connect with. 


Personal 1-on-1 Mentorship

Mentorship is a critical aspect of the Data Engineering Boot Camp. You’ll meet regularly with your mentor, who holds you accountable, helps you grow and imparts real-world knowledge and advice. Our mentors are experienced data professionals; we accept only 1 in 12 applicants.

Meet some of our mentors:

  • Paras Doshi, Head of Decision Science

  • Akhil Raj, Director of Data Engineering

  • Zuraiz Uddin, Senior Machine Learning Engineer

  • Faisal Malik, Data Engineer

Data Engineering Boot Camp Prerequisites

This program is designed for applicants with:

  • Professional work experience in an analytical role, ideally working with SQL, or as a software engineer using Python, Java or C++

  • Or a bachelor's degree in computer science or another degree involving extensive programming

Proficiency in SQL and basic Python skills are required.

Self-taught programmers and graduates of other WashU boot camps who pass the technical skills survey can also enroll in the program.

Data Engineering Boot Camp FAQs

Can a data analyst become a data engineer?

Yes, it is possible. To move from data analyst to data engineer, it is important to know data visualization, data wrangling, Python, R and machine learning concepts. Continuous learning is critical: take up courses and brush up on your programming fundamentals.

Does data engineering require coding?

Yes. A data engineer should know the fundamentals of the Python and SQL programming languages.

How much do data engineers make?

According to Indeed, the average base salary for data engineers in the United States is $112,564; other estimates put the national average at $114,561.


Is a data engineer the same as a software engineer?

The difference between the two is their scope of work. Data engineering is a specialized role focused on building data systems and databases that store, consolidate and retrieve data, while software engineering is a broader role responsible for building systems, applications, websites and tools.

What is big data engineering?

Big data engineering is the science of collecting data from different sources and transforming and storing it in a dedicated database that can support insights generation or the creation of Machine Learning-based solutions.

What is the difference between a data analyst and data engineer?

A data analyst is mainly responsible for analyzing and interpreting data to support decision making, while a data engineer is responsible for building and maintaining the systems and infrastructure that support the organization's data. Data analysts often use tools like SQL, Excel and statistical software to analyze data and create reports, while data engineers use programming languages like Java and Python to develop the systems that store and process data.

Will data engineering be automated?

Data engineering cannot be fully automated, but its repetitive tasks are likely to be automated in the future, allowing data engineers to focus on more complex work.

Can a non IT person become a data engineer?

A technical degree definitely helps, but you can become a data engineer even from a non-IT background. In our online boot camp, we help you build your portfolio by working on real-life projects that focus on solving real-world bottlenecks and inefficiencies.

More questions about the program?

Speak to our enrollment team by completing an application, email Carolina, our enrollment advisor, or explore more frequently asked questions.


This program is offered through the Washington University in St. Louis McKelvey School of Engineering in partnership with Springboard.

Washington University in St. Louis Technology and Leadership Center Lopata Hall, 5th Floor MSC 1141-0122-05 St. Louis, MO 63130

Copyright © 2023

Powered by Springboard
