Have you ever wanted to be a part of something BIG? Do you have experience integrating new and emerging technologies into existing environments as technologies evolve and demands change? Are you eager to work with open source software and the fast-paced world of Big Data technology? In this role, you can make an immediate impact at a leading global technology company, Mastercard.

The Engineer is responsible for working with teams across the company to implement new data solutions while maintaining the stability of the platform. You will assess technologies and approaches for ingestion, transformation and storage, and work within the Fraud Data Engineering team to grow its knowledge and expertise. You will get the chance to work with extremely large data sets and be on the cutting edge of transforming the way Mastercard captures, processes, stores and visualizes transactional data.
RESPONSIBILITIES
- Develop scalable streaming solutions based on Spark, Kafka and/or Flume
- Work closely with team members from across Mastercard to identify functional and system requirements
- Design, develop and implement data models, with data quality and integrity top of mind, to support our products
- Develop software using open source technologies to interface with distributed and relational data stores
KNOWLEDGE AND SKILL REQUIREMENTS
- BS/BA degree in Computer Science, Information Systems or related field
- 1-3 years of experience using Hadoop or related technologies such as Hive, Impala, Solr, HBase and/or Avro
- Experience working with the Cloudera stack (Kafka, Spark, Flume, Hadoop, etc.)
- Understanding of data architecture, replication and administration
- Solid back-end development experience using Java, Scala or similar languages
- Excellent oral and written communication
- Excellent problem-solving skills
OTHER VALUABLE SKILLS
- Prior experience with business intelligence technologies
- Experience with Agile/Scrum methodologies
- Experience with real-time or near-real-time data ingestion
- Deep understanding of high-performance data concepts such as file stores, wide-column databases, key-value stores, etc.