- Professional
- Bentonville office
Position Summary...
We are seeking a skilled Data Engineer II to design, build, and maintain scalable data pipelines and foundational data assets that power advanced analytics, machine learning, and business intelligence across the organization. In this role, you will collaborate closely with data scientists, analysts, and engineers to ensure data is accessible, reliable, and ready for insight generation. You will contribute to enterprise data engineering patterns, improve data quality and performance, and help evolve our modern data ecosystem.
What you'll do...
- Develop and optimize end-to-end data pipelines that ingest, transform, and deliver data across business domains.
- Design and implement data models for both structured and unstructured data to support analytics and operational needs.
- Build and maintain ETL/ELT workflows using tools such as Airflow and Dataflow.
- Improve data quality through automated validation, monitoring, and anomaly detection frameworks.
- Implement scalable data processing using Dataflow or similar technologies.
- Write efficient SQL and code to support data transformation and performance optimization.
- Ensure data governance and security alignment, including metadata management and documentation.
- Partner with cross-functional teams to support analytical products and experimentation initiatives.
- Stay current with emerging data engineering tools and best practices to drive continuous improvement.
What you'll bring:
- Hands-on experience developing data pipelines and ETL/ELT processes in production environments.
- Proficiency with Python for data manipulation and automation.
- Strong SQL skills and experience working with modern data stores (BigQuery and MongoDB preferred).
- Experience with cloud-based data ecosystems, preferably GCP.
- Familiarity with version control, CI/CD, and engineering best practices.
- Ability to analyze complex datasets and troubleshoot pipeline performance issues.
- Clear communication skills with the ability to partner effectively across teams.
- Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field and 2 years’ experience.
- Experience with data pipeline and orchestration frameworks such as Airflow and Dataflow.
- Knowledge of data lake and lakehouse architectures.
- Familiarity with streaming data technologies like Kafka.
- Interest in supporting ML feature pipelines and operationalizing data for AI initiatives.
Eligibility requirements apply to some benefits and may depend on your job classification and length of employment. Benefits are subject to change and may be subject to a specific plan or program terms.
For information about benefits and eligibility, see One.Walmart.
The annual salary range for this position is $80,000.00 to $155,000.00. Additional compensation includes annual or quarterly performance bonuses. Additional compensation for certain positions may also include:
- Stock
Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in Computer Science or related field. Option 2: 2 years’ experience in software engineering or related field.
Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
We value candidates with a background in creating inclusive digital experiences, with demonstrated knowledge of implementing Web Content Accessibility Guidelines (WCAG) 2.2 AA standards, working with assistive technologies, and integrating digital accessibility seamlessly. The ideal candidate will have knowledge of accessibility best practices and will join us as we continue to create accessible products and services, following Walmart’s accessibility standards and guidelines to support an inclusive culture.
Masters: Computer Science