Custom Application Architect

3 days ago · 2026/08/25
Other Business Support Services

Job description

Project Role: Custom Application Architect
Project Role Description: Provide functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education
Summary:
As an Application Architect, you will provide functional and technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. Your typical day will involve collaborating with various teams to assess impacts, producing estimates for client work requests, and ensuring that the applications meet the evolving needs of the organization. You will engage in discussions to identify potential improvements and innovations, while also guiding the team in implementing best practices and methodologies to enhance application performance and user experience.

Roles & Responsibilities:
- Work as part of the data engineering team to build, maintain, and optimize scalable data pipelines for large-scale data processing.
- Develop and implement ETL/ELT processes using PySpark, Spark, and other relevant tools to move and transform data from various sources.
- Assist in designing and deploying solutions on major cloud platforms such as AWS, Azure, or GCP.
- Support the development and maintenance of Big Data processing frameworks and data lakes to handle structured and unstructured data.
- Collaborate with data scientists, analysts, and other engineers to ensure data accuracy and availability.
- Implement data ingestion strategies, ensuring the secure and efficient movement of data across different storage solutions.
- Work on real-time streaming data pipelines and batch data processing to handle high-volume workloads.
- Develop and maintain reusable code for data extraction, transformation, and loading (ETL) operations.
- Contribute to performance tuning of Spark jobs and data pipelines to ensure scalability and efficiency.
- Assist in maintaining governance and data security practices across cloud platforms.
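
The extract-transform-load pattern named in the responsibilities above can be sketched in miniature. This is a toy, standard-library Python illustration with hypothetical data and field names; a production pipeline for this role would use PySpark DataFrames on Databricks rather than plain lists:

```python
import json

# Toy ETL sketch: extract JSON records, transform (validate + reshape), load into a sink.
# Data, field names, and the list-based "sink" are all hypothetical illustrations.

RAW = """
[{"id": 1, "amount": "120.5", "region": "EU"},
 {"id": 2, "amount": "80.0",  "region": "US"},
 {"id": 3, "amount": "nan",   "region": "EU"}]
"""

def extract(raw: str) -> list:
    """Parse the raw JSON payload into Python records."""
    return json.loads(raw)

def transform(records: list) -> list:
    """Cast amounts to float and drop records that fail validation."""
    out = []
    for r in records:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue
        if amount != amount:  # NaN fails equality with itself, so this drops NaNs
            continue
        out.append({"id": r["id"], "amount": amount, "region": r["region"]})
    return out

def load(records: list, sink: list) -> None:
    """Append validated records to the target sink (stands in for a table or lake write)."""
    sink.extend(records)

sink = []
load(transform(extract(RAW)), sink)
print(len(sink))  # 2 valid records survive; the "nan" amount is dropped
```

The same extract/transform/load split maps onto Spark jobs, where each stage becomes a DataFrame read, a chain of transformations, and a write to cloud storage.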
Professional & Technical Skills:
- Experience with AWS, Azure, or GCP for data engineering workflows.
- Strong proficiency in PySpark, Spark, or similar frameworks for building scalable data pipelines.
- Understanding of Big Data architectures, data storage, and data processing concepts.
- Familiarity with cloud-native data storage solutions such as S3, Blob Storage, BigQuery, or Redshift.
- Experience with data orchestration tools like Apache Airflow or similar.
- Knowledge of data formats like Parquet, Avro, or JSON.
- Strong coding skills in Python for building data pipelines.
- Good understanding of SQL and database technologies.
- Excellent troubleshooting, debugging, and performance optimization skills.
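
As a minimal illustration of the SQL and Python skills listed above, the sketch below runs a grouped aggregation. The posting names no specific database; sqlite3 is used here only because it ships with Python and keeps the example self-contained:

```python
import sqlite3

# Illustrative only: an in-memory table with made-up event data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("EU", 120.5), ("US", 80.0), ("EU", 40.0)],
)

# Aggregate amounts per region, largest total first.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM events GROUP BY region ORDER BY 2 DESC"
).fetchall()
print(rows)  # [('EU', 160.5), ('US', 80.0)]
conn.close()
```

The same GROUP BY/aggregate pattern carries over directly to Spark SQL or warehouse engines like BigQuery and Redshift.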

Additional Information:
- Experience with real-time data processing and Kafka or similar tools.
- Exposure to CI/CD pipelines and DevOps practices in cloud environments.
- Familiarity with dbt (Data Build Tool) for data transformation workflows.
- Experience with other ETL tools like Informatica, Talend, or Matillion.
Education Qualification:
- 15 years of full-time education is required.

This job post has been translated by AI and may contain minor differences or errors.
