Mid Data Engineer (Databricks / PySpark)
Location: Washington, DC Metro Area / Northern Virginia
Work Schedule: Hybrid; candidates should be available to work onsite in the DC Metro Area 2–3 days per week or on an as-needed basis
Clearance: Active DoD Secret clearance required; Top Secret / SCI eligibility preferred
Citizenship: U.S. Citizenship required
Position Overview
IntelliTech is seeking a Data Engineer to support a large-scale DoD data platform and analytics environment. In this role, you will design, build, maintain, and optimize scalable data pipelines and data products that enable mission-critical analytics, reporting, and downstream data use cases in a complex enterprise setting.
The ideal candidate brings strong hands-on experience with Databricks, PySpark, Python, SQL, and cloud-based data engineering, along with a proven ability to work across large data environments, integrate disparate data sources, and deliver reliable, secure, and high-performing data solutions.
Key Responsibilities
- Design, develop, and maintain scalable batch and streaming data pipelines supporting enterprise analytics and data operations.
- Build and optimize Databricks and PySpark workflows for ingestion, transformation, validation, and processing of large datasets.
- Develop robust ETL/ELT solutions across structured and unstructured data sources, including APIs, relational databases, NoSQL databases, and cloud storage.
- Support data modeling, pipeline orchestration, data quality, lineage, and performance tuning across the data lifecycle.
- Implement and maintain cloud-native data engineering solutions in environments such as AWS, Azure, or GCP.
- Collaborate with engineers, analysts, data scientists, and other stakeholders to support reporting, analytics, and AI/ML data preparation workflows.
- Integrate data pipelines into DevSecOps / CI/CD workflows and support secure, repeatable deployment practices.
- Troubleshoot pipeline failures, improve observability, and optimize the performance and reliability of enterprise data services.
- Contribute to technical documentation, engineering standards, and best practices for enterprise-scale data development.
- Operate effectively within Agile / SAFe delivery environments supporting mission systems.
Required Qualifications
- Active DoD Secret clearance.
- Bachelor’s degree in Computer Science, Data Science, Engineering, Information Systems, or a related technical discipline, and 4+ years of relevant experience; or Master’s degree in a related field and 2+ years of relevant experience.
- Strong hands-on experience designing, building, and maintaining production-grade data pipelines and integration solutions.
- Strong experience with Databricks, Apache Spark, and PySpark in enterprise data environments.
- Strong programming skills in Python and SQL; experience with Java or similar languages is a plus.
- Experience implementing ETL/ELT pipelines and data integration processes across multiple sources and platforms.
- Experience with relational and non-relational databases, data modeling, and query optimization.
- Experience working in cloud-based data environments such as AWS, Azure, or GCP.
- Experience with source control, CI/CD pipelines, and modern software engineering practices.
- Ability to identify and resolve issues related to data quality, pipeline performance, and operational reliability.
- Strong communication and collaboration skills, with the ability to work effectively across technical and non-technical teams.
Preferred Qualifications
- Active Top Secret clearance with SCI eligibility.
- Experience supporting enterprise data platforms in DoD or other federal environments.
- Experience with data governance, metadata management, lineage, and access control concepts.
- Experience supporting AI/ML data preparation and feature engineering workflows.
- Experience with Airflow, Kafka, Docker, Kubernetes, or related orchestration and containerization technologies.
- Experience supporting data engineering solutions in secure or multi-environment enterprise settings.
- Relevant cloud or data engineering certifications.
Interview Process
- A video interview is required and may include a technical assessment.
- Candidates should be prepared to clearly discuss:
  - their hands-on experience with Databricks, PySpark, Python, and SQL
  - examples of pipelines or data applications they have built from the ground up
  - large-scale data challenges they have solved
  - their experience with cloud, ETL/ELT, and enterprise data engineering best practices
Work Authorization / Clearance Sponsorship
At this time, IntelliTech will only consider candidates who currently possess an active Secret clearance or higher. Clearance sponsorship is not available for this role.
Compensation and Benefits
IntelliTech is committed to fair and equitable compensation practices. The annual salary range for this position is $110,000 to $180,000, and actual compensation will be based on a variety of factors unique to each candidate, including job-related skills, relevant experience, certifications, training, and level of seniority. IntelliTech uses the full breadth of the salary range in making compensation decisions.
IntelliTech also offers a comprehensive benefits package designed to support our employees’ well-being and professional growth, including health, dental, and vision insurance, a 401(k) plan, paid time off, professional development opportunities, and flexible work arrangements that promote work-life balance.
About IntelliTech
IntelliTech is a dynamic and forward-thinking small business specializing in Full Stack Engineering, Data Analytics, Cloud Solutions, and DevSecOps services. We support government and commercial clients in solving complex technical challenges through practical, mission-focused engineering and delivery excellence.
Equal Opportunity Employer
IntelliTech is an Equal Opportunity Employer and is committed to fostering a diverse and inclusive workplace. We encourage all qualified candidates to apply.