I saw your resume on a job site. We are looking for a Hadoop Engineer. Please call or email me if you are interested in applying for this position.
I would appreciate it if you could forward this email to anyone who might be looking for a change in the same domain.
Job Title: Hadoop Engineer
Job Location: San Jose, CA
Duration: Contract 12+ Months
Design, develop, and tune data products, applications, and integrations on large-scale data platforms (Hadoop, Kafka streaming, HANA, SQL Server, etc.) with an emphasis on performance, reliability, scalability, and above all quality.
Analyze business needs, profile large data sets, and build custom data models and applications to drive Adobe business decision-making and customer experience.
Develop and extend design patterns, processes, standards, frameworks and reusable components for various data engineering functions/areas.
Collaborate with key stakeholders, including business teams, engineering leads, architects, BSAs, and program managers.
The ideal candidate will have:
- MS in Computer Science or a related technical field with 10+ (level 5) years of strong hands-on experience in enterprise data warehousing / big data implementations and complex data solutions and frameworks
- Strong SQL, ETL, scripting, and/or programming skills, with a preference for Python, Java, Scala, and shell scripting
- Demonstrated ability to clearly form and communicate ideas to both technical and non-technical audiences.
- Strong problem-solving skills, with the ability to isolate, deconstruct, and resolve complex data and engineering challenges
- Results-driven with attention to detail, a strong sense of ownership, and a commitment to up-leveling the broader IDS engineering team through mentoring, innovation, and thought leadership
- Familiarity with streaming applications
- Experience with development methodologies such as Agile/Scrum
- Strong experience with Hadoop ETL / data ingestion: Sqoop, Flume, Hive, Spark, HBase
- Strong experience with SQL and PL/SQL
- Nice to have: experience in real-time data ingestion using Kafka, Storm, Spark, or complex event processing
- Experience with Hadoop data consumption and other components: Hive, Hue, HBase, Spark, Pig, Impala, Presto
- Experience monitoring, troubleshooting, and tuning services and applications, along with operational expertise: strong troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking
- Experience in the design and development of API frameworks using Python/Java is a plus
- Experience in developing BI dashboards and reports is a plus
Thanks & Regards,
Sr. Executive- Talent Acquisition
3084 Congressional Office Park, Route 27, Suite # 3, Kendall Park, New Jersey 08824 |
Phone: (732) 504-6942 | Cell # | Fax: (732) 398-0506 | www.fstonetechnologies.com |