Big Data Architect
7-12 Years | Herndon, VA / Washington, D.C.

Job Skills & Qualifications

  • B.Tech/B.E., MCA, or M.Sc. IT graduates with at least 7 years of hands-on experience in software development and design
  • Strong, effective interpersonal and communication skills, and the ability to interact professionally with a diverse group of clients and staff
  • Ability to handle multiple projects concurrently

Responsibilities

  • Design solutions independently based on the high-level architecture
  • Manage the technical communication between the survey vendor and internal system.
  • Maintain the production systems (Kafka, Hadoop, Cassandra, Elasticsearch)
  • Lead the design, implementation, and continuous delivery of a sophisticated data pipeline supporting development and operations
  • Review current database architecture and propose solutions to scale processing and reporting functionality
  • Work closely with business stakeholders, product owners, and technical staff to design industry-leading solutions
  • Proactively analyze and bring forth ideas for continuous improvement of the platform
  • Build a cloud-based platform that allows secure development of new applications
  • Proactively identify, troubleshoot, and resolve issues in live database systems

Technical Skills

  • 4+ years of experience as a Big Data Engineer or similar role
  • Expert in Business Intelligence (BI) and Data Warehouse enterprise platforms
  • 4+ years of experience in Microsoft data stack: SQL Server, SSIS, SSAS, SSRS, etc.
  • Strong knowledge of NoSQL topologies using tools such as Hadoop, Elasticsearch, Cassandra, MongoDB, etc.
  • Experience with data streaming technologies such as Kinesis and Kafka
  • Experience with .Net development and web development
  • Working knowledge of current hardware systems and disk subsystems commonly used in fault tolerant production environments
  • 4+ years of experience with the Hadoop ecosystem and its component frameworks: HDFS, YARN, MapReduce, Apache Pig, Hive, Flume, Sqoop, and Kafka
  • Design, develop, document, and architect Hadoop applications
  • Manage and monitor Hadoop clusters using their log files
  • Develop MapReduce code that runs seamlessly on Hadoop clusters
  • Expertise in newer technologies such as Apache Spark and Scala programming
  • Experience in the automotive industry