Stay safe from recruitment fraud! The only way to apply for a position at Mobileye is via our jobs page. Mobileye will never ask an applicant to send any money or purchase any equipment. Not sure? Contact us at: recruitment_mobileye@mobileye.com

R&D

Data Engineer for Software Engineering Group - Temporary Position

Jerusalem

We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and optimizing data pipelines, handling large datasets, and working within cloud-based and on-premises environments. You will be responsible for designing and implementing efficient, scalable solutions to manage, process, and analyze high volumes of data, ensuring the reliability and performance of our data infrastructure.

What will your job look like:

  • Design, build, and maintain scalable data pipelines to process large datasets efficiently
  • Develop and implement data models and architectures that support both real-time and batch data processing
  • Ensure data integrity, security, and accuracy across all systems
  • Collaborate with data scientists, analysts, and other engineers to ensure data availability and quality
  • Optimize data retrieval and storage processes to handle large volumes of data seamlessly
  • Work with structured, semi-structured, and unstructured data, integrating various data sources
  • Troubleshoot and resolve data issues, ensuring continuous operation of the data infrastructure
  • Maintain and enhance ETL processes, ensuring scalability and performance in handling large datasets
  • Stay up-to-date with industry best practices and emerging technologies related to big data engineering

All you need is:

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • At least 4 years of experience in data engineering, with a focus on large-scale data processing and big data technologies
  • Strong proficiency in Python
  • Experience with data pipeline and workflow management tools
  • Hands-on experience with large-scale data processing frameworks like Apache Spark, Hadoop, or similar
  • Familiarity with data modeling, ETL processes, and data warehousing concepts, including table formats such as Apache Iceberg or similar
  • Good knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., Cassandra, MongoDB)
  • Experience with AWS-based data lakes, including working with Amazon S3 for storage, querying data using Amazon Athena, and managing datasets stored in Parquet format
  • Ability to take significant ownership of your work

Nice to have:

  • Knowledge of machine learning frameworks and integrating data pipelines for model training and deployment
  • Experience with version control systems (e.g., Git), CI/CD pipelines, and automation tools
  • Experience with containerization (Docker, Kubernetes) 

Save lives

The value of life above all other considerations.

Evolution as revolution

Creating the autonomous future, leap by leap.

Geek proud

Our technology & problem-solving tackles the toughest challenges facing the industry.

Live the dream. Stay humble.

We are coding a new reality. We are also understated and work as a team.
