Big Data Developer
- Position: Sr. Associate / Lead / Associate Specialist
- Total Experience: 7 – 14 Years
- Experience in core Java, OOP concepts, multithreading, and the collections framework.
- Hands-on experience with Apache Spark.
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming.
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.
- Proficiency with Hadoop v2, MapReduce, and HDFS.
- In-depth knowledge of and experience with the Hadoop ecosystem and architecture (including HDFS and YARN) across multiple distributions.
- Experience with various messaging systems, such as Kafka or RabbitMQ.
- Knowledge of various ETL techniques and frameworks, such as Flume.
- Knowledge of GitHub, TeamCity, and Sonar.
- Hands-on experience with Agile/Scrum methodologies.
- Developing Hive queries and creating views on Hive tables.
- Experience with data partitioning.
- Writing and reading data on HDFS in formats such as CSV and Parquet.
- Ability to work with large volumes of data to derive business intelligence.
- Management of a Hadoop cluster with all included services.
- Resource management on Cloudera, e.g. with YARN.
Interested candidates can send their resumes to firstname.lastname@example.org