Greetings for the day…!
Hope everything is going well.
This is Ravi from Saxon Global Inc. I am currently working on an immediate opening; please find the JD below.
Big Data Developer
For the Big Data position, the candidate will work on the following process, and should therefore know how to use the tools listed:
- The process starts with the SysLog-NG client and SysLog-NG server. Knowledge of these is not critical, just nice to have (i.e., resumes that list them will be favored).
- Next, the process uses Flume and Kafka to move data around the Hadoop cluster.
- Exposure to the Cloudera Hadoop distribution would be nice, but knowledge of another vendor's Hadoop distribution, such as HortonWorks, is acceptable instead.
- Then, the process uses the ELK stack, i.e., ElasticSearch, LogStash, and Kibana (not having all of these on the resume is not a show-stopper).
- Any candidate who claims to have worked on this process should have written Spark jobs in Scala or Java. Such candidates most likely come from a Java background, since Scala interoperates with Java and Scala/Java Spark jobs run on the JVM (Java Virtual Machine).
- Finally, because Spark jobs can also be written in Python (PySpark), we may see some references to Python on the resumes.
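To make the pipeline above concrete for screening purposes, here is a minimal, hypothetical sketch of the kind of per-record transformation a Spark job in this pipeline might apply to syslog lines arriving via Flume/Kafka. It is written as plain, self-contained Java (no Spark dependency), and the log format and field names are illustrative assumptions, not taken from the JD.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: parsing one syslog-style record into structured fields,
// the kind of map/filter step a Spark job in such a pipeline might perform.
// The exact log format below is an illustrative assumption.
public class SyslogLineParser {

    // Matches lines like: "Jan 12 10:15:01 host1 sshd[1234]: Failed password for root"
    private static final Pattern SYSLOG = Pattern.compile(
        "^(\\w{3}\\s+\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2})\\s+(\\S+)\\s+([^\\[:]+)(?:\\[\\d+\\])?:\\s(.*)$");

    // Holds the parsed fields of one log event.
    public record LogEvent(String timestamp, String host, String program, String message) {}

    public static Optional<LogEvent> parse(String line) {
        Matcher m = SYSLOG.matcher(line);
        if (!m.matches()) {
            // Malformed lines are dropped, as a Spark filter() step would do.
            return Optional.empty();
        }
        return Optional.of(new LogEvent(m.group(1), m.group(2), m.group(3), m.group(4)));
    }

    public static void main(String[] args) {
        String line = "Jan 12 10:15:01 host1 sshd[1234]: Failed password for root";
        parse(line).ifPresent(e ->
            System.out.println(e.host() + " " + e.program() + " -> " + e.message()));
    }
}
```

In a real deployment the same logic would run inside a Spark transformation over records consumed from Kafka, with the parsed events then indexed into ElasticSearch for Kibana dashboards.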
Saxon Global Inc.
p: (972) 550-9346 ext:220 d: 972-573-3646
Greenway Drive, Suite #660, Irving, TX
w: www.SaxonGlobal.com e: firstname.lastname@example.org