Our application process

At Alshaya, we make thousands of job offers every year, and we look forward to welcoming successful candidates to our growing business.

Here are some important facts you need to know about our processes, so you can be sure that your job offer is genuine.

  • We never ask candidates to pay fees or send us money.
  • We never ask candidates to give personal information such as date of birth, address, passport details, bank details, etc.
  • You always deal directly with us, and all communication will come from an official ‘@alshaya.com’ email address or through an affiliated Alshaya agency. To check whether you are dealing with an affiliate, email us at alshayajobs@alshaya.com.

Receiving a job offer

If your job offer seems too good to be true, it probably is. There are three key things to remember if you suspect an offer is not genuine:

  • Do not contact the original sender
  • Do not provide any personal information
  • Do not make any payment

If you have concerns and wish to confirm a job offer is genuine, email us at alshayajobs@alshaya.com. Please include a photo or screenshot of the message you have received (please do not forward the original).

Note: Please do not send your CV to the email address above, as it will not be considered an application for work.

Click here to learn more about our Job Offer process.



Data Engineer - Data Center

Job Number: IND2025-CTO03
Work type: Permanent - Full Time
Location: India
Categories: Mid-Senior Level

Role Profile:

As a Data Engineer, you will be expected to:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large data sets that meet functional and non-functional business requirements.
  • Design and implement internal process improvements, such as automating manual processes and optimising data delivery.
  • Help build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources, using Python, SQL and GCP ‘big data’ technologies.
  • Work with stakeholders, including the Product, Data Integration and Design teams, to resolve data-related technical issues and support their data infrastructure needs.
  • Create data tools for BI reporting, digital analytics and data science team members that help them build and optimise our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Key performance areas include, but are not limited to:

  • Takes a breadth-first approach to problem solving and encourages systematic thinking.
  • A good communicator and collaborator, able to simplify concepts for wider non-technical audiences.
  • Demonstrates a passion for data engineering and the ability to work cross-functionally.
  • Culturally sensitive and able to work across cultures and time zones.

Experience

  • Experience of designing, building and maintaining scalable ETL data processing systems using Python, Java, Scala or a similar programming language – this is a hands-on coding role.
  • Understanding and experience of data processing techniques and analytics, with strong SQL experience.
  • Understanding and experience of GCP data tools (Google BigQuery, Dataflow, Dataproc, Google Cloud Data Transfer, etc.).
  • Understanding and experience of using and creating CI/CD pipelines with Azure DevOps, Jenkins, GoCD or similar technologies.
  • Understanding and experience of using orchestration tools such as Apache Airflow, Azure Data Factory, AWS Data Pipeline or similar.
  • Understanding and experience of working with cloud deployment environments (GCP or AWS preferred).
  • A proponent of software craftsmanship, TDD and automated testing.
  • A DevOps background with an understanding of cloud-based IaC technologies:
    • Terraform or serverless
    • GitHub for version control
    • Familiarity with CI/CD
  • Understanding of data modelling and/or data warehousing techniques, such as star schema or raw data vault.
  • Experience of working with large-scale data sets, including both structured and unstructured data in formats such as Parquet, Avro and JSON.

 

Advertised: India Standard Time
Applications close: India Standard Time
