Landmark Health is hiring Big Data Engineers to join a growing organization and support our Data Integration Team. Our Big Data Engineers will work side by side with our IT development team to assist in the design and development of our data feeds, data warehouses, data lakes, and enterprise reporting. They will have strong working knowledge of ETL development using SQL Server Integration Services (SSIS) and the ability to liaise between our internal stakeholders, business partners, and IT teams to determine requirements and apply business rules to data. They will also provide recommendations on data modeling and the ingestion of our health plan partners' data, and implement the H-Scale tool, built on top of the Hadoop ecosystem, to integrate, scale, and manage healthcare data. If you have a passion for Big Data technologies and Hadoop and are looking for long-term career growth and opportunity, we want to hear from you!
Preferred locations: Huntington Beach, CA (Corporate Headquarters); Ellicott City, MD; or Los Angeles, CA.
If you are not in one of those locations, we are willing to consider full-time work at home (REMOTE).
Unfortunately, VISA SPONSORSHIP is NOT offered for these roles.

Responsibilities
• Architect, design, develop, implement, configure and document current and future Hadoop infrastructure
• Perform software installation and configuration, database backup and recovery, storage management, performance tuning, database connectivity and security
• Implement and support various components of the Hadoop ecosystem
• Monitor Hadoop cluster connectivity and performance
• Support and maintain HDFS
• Set up new Hadoop users
• Administer new and existing Hadoop infrastructure
• Support data modeling, design, and implementation
• Work closely with infrastructure, network, database, business intelligence, and application teams to ensure business applications are readily available and performing within agreed-upon service levels
• Develop tools for automated build, test, deployment, and management of the platform
• Continuously improve integration and delivery systems
Required Qualifications

• Bachelor’s degree in Computer Science, Information Systems, Information Technology, Management Information Systems, or equivalent experience is preferred
• 3+ years of experience designing, developing, and implementing solutions in Hadoop environments
• 3+ years of experience supporting technologies in the Hadoop ecosystem
• Expertise with core Hadoop components: HDFS, YARN, MapReduce, Hive, Pig, and Spark
• Proficiency with related ecosystem tools such as NiFi, HBase, Flume, and Sqoop
• Ability to write reliable, manageable, and highly efficient code using Hadoop tools
• Proficiency with SQL scripting
• Working knowledge of Linux/Unix, distributed computing, networks, and network security
• Strong problem solving and analytical skills
• Effective oral and written communication skills for both technical and non-technical audiences
• Excellent systems and data analysis skills
• Ability to prioritize and manage competing projects