Amazon Off Campus Jobs Walkin Drive and Recruitment Eligibility, Careers, Salary, Syllabus, Exam Pattern, Selection Process: Amazon has announced a job notification for the post of Data Engineer. Students from various disciplines can apply for the Amazon Recruitment Drive 2023. Interested and eligible candidates can read the details below.
DESCRIPTION
Job summary
As a Data Engineer, you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data.
Data Engineers help us design, manage, and continuously enhance our analytics capabilities.
You will develop and deliver analytics applications including metrics generation, metrics correlation, modeling, and many other use cases to help improve process effectiveness, customer experience, and automation.
As a DE, you will handle the technical aspects of data warehousing (Redshift, Spectrum, EMR, ETL) and infrastructure, and build data pipelines, tools, and reports that enable program managers, analysts, BIEs, solution architects, and executives to design and deliver benchmarking services for Amazon’s business units.
Job Locations:
Bangalore, Chennai, Gurgaon, Hyderabad
Key job responsibilities
• Design data schemas and operate internal data warehouses and SQL/NoSQL database systems
• Design data models, and implement, automate, optimize, and monitor data pipelines
• Own the design, development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions
• Analyze and solve problems at their root, stepping back to understand the broader context
• Manage Redshift/Spectrum/EMR infrastructure, and drive architectural plans and implementation for future data storage, reporting, and analytics solutions
• Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
• Work on the data lake platform and its different components, such as Hadoop and Amazon S3
• Work on SQL-on-Hadoop technologies such as Spark, Hive, Impala, etc.
• Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
• Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment
• Conduct rapid prototyping and proofs of concept
• Conceptualize and develop automation tools for benchmarking data collection and analytics
• Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies
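To give candidates a feel for the extract-transform-load work described above, the following is a minimal, hypothetical PySpark sketch. It is not Amazon code: the bucket names, paths, and column names are invented for illustration. It reads raw order events from S3, aggregates them with Spark SQL, and writes Parquet back to S3 in a form that could be queried through Redshift Spectrum or copied into Redshift.

# Minimal, hypothetical PySpark ETL sketch -- all bucket names, paths,
# and column names below are invented for illustration only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Extract: read raw order events (JSON) from an S3 landing zone.
orders = spark.read.json("s3://example-raw-bucket/orders/2023/01/")

# Transform: aggregate daily revenue per marketplace with Spark SQL.
orders.createOrReplaceTempView("orders")
daily_revenue = spark.sql("""
    SELECT order_date,
           marketplace,
           SUM(order_amount) AS total_revenue,
           COUNT(*)          AS order_count
    FROM orders
    GROUP BY order_date, marketplace
""")

# Load: write Parquet back to S3, partitioned by date, so it can be
# queried via Redshift Spectrum or copied into Redshift tables.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/daily_revenue/"))

spark.stop()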
BASIC QUALIFICATIONS
• Enrolled in a Bachelor's or Master's degree program in Computer Science, Engineering, Mathematics, or a related technical discipline.
• Industry experience as a Data Engineer, BI Engineer, or in a related field.
• Hands-on experience building big data solutions using EMR/Elasticsearch/Redshift or an equivalent MPP database.
• Hands-on experience and advanced knowledge of SQL and scripting languages such as Python, Shell, Ruby, etc.
• Hands-on experience working with the different reporting/visualization tools available in the industry.
• Demonstrated strength and experience in data modeling, ETL development and data warehousing concepts
PREFERRED QUALIFICATIONS
• 0-2 years of experience as a Data Engineer, BI Engineer, or in a related role at a company with large, complex data sources.
• Experience working with AWS big data technologies (EMR, Redshift, S3, Glue, Kinesis and Lambda) or equivalent industry tools
• Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.
• Experience working with different SQL/NoSQL databases.
• Knowledge of data lake platforms and the different technologies used in a data lake to retrieve and process data.