Description:
The L3Harris Enterprise Data, Analytics, and Automation team is seeking a Data Engineer experienced in managing enterprise-level data life cycle processes. The role covers data ETL/ELT pipelines, adherence to data standards, maintenance of data frameworks, data cleansing, pipeline orchestration, and data consolidation. The selected individual will play a pivotal role in maintaining ontologies, building scalable data solutions, and developing dashboards that deliver actionable insights for the enterprise. The position supports the company's modern data platform, focusing on data pipeline development and maintenance, platform design, documentation, and user training. The goal is seamless access to data at all levels of the organization, empowering decision-makers with clean, reliable data.
Responsibilities:
• Design, build, and maintain robust data pipelines to ensure reliable data flow across the enterprise.
• Maintain data pipeline schedules, orchestrate workflows, and monitor overall pipeline health to ensure continuous data availability.
• Create, update, and optimize data connections, datasets, and transformations to align with business needs.
• Troubleshoot and resolve data sync issues, ensuring consistent and correct data flow from source systems.
• Collaborate with cross-functional teams to uphold data quality standards and ensure accurate data is available for use.
• Use Palantir Foundry to establish data connections to source applications, extract and load data, and design complex logical data models that meet functional and technical specifications (a minimal sketch of this style of work follows this list).
• Develop and manage data cleansing, consolidation, and integration mechanisms to support big data analytics at scale.
• Build visualizations using Palantir Foundry tools and assist business users with testing, troubleshooting, and documentation, including data maintenance guides.
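For a concrete sense of the Foundry work described above, here is a minimal sketch of a Python transform using Foundry's publicly documented transforms API; the dataset paths, column names, and cleansing rules are hypothetical illustrations, not actual L3Harris pipelines:

```python
# Minimal sketch of a Foundry Python transform; dataset paths and
# column names are hypothetical, not actual L3Harris datasets.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Enterprise/curated/orders_clean"),      # hypothetical output dataset
    source=Input("/Enterprise/raw/orders_extract"),  # hypothetical source dataset
)
def clean_orders(source):
    # Basic cleansing: trim the business key, drop duplicate records,
    # and remove rows that are missing an order date.
    return (
        source
        .withColumn("order_id", F.trim(F.col("order_id")))
        .dropDuplicates(["order_id"])
        .filter(F.col("order_date").isNotNull())
    )
```

In practice, a transform like this would typically run on a schedule and be monitored for freshness and sync status, which is the day-to-day pipeline maintenance this role describes.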
Requirements:
• Bachelor's Degree and a minimum of 6 years of prior Palantir experience, or a Graduate Degree and a minimum of 4 years of prior Palantir experience.
• In lieu of a degree, a minimum of 8 years of prior Palantir experience.
• Experience designing and developing data pipelines in PySpark, Spark SQL, SQL, or Code Build (see the sketch after this list).
• Experience building and deploying data synchronization schedules and maintaining data pipelines using Palantir Foundry.
• Minimum of 4 years of experience with data pipeline development or ETL tools such as Palantir Foundry, Azure Data Factory, SSIS, or Python.
• Minimum of 4 years of experience in data integration.
• Strong understanding of Business Intelligence (BI) and Data Warehouse (DW) development methodologies.
• Hands-on experience with the Snowflake Cloud Data Platform, including data architecture, query optimization, and performance tuning.
• Proficiency in Python, PySpark, Pandas, Databricks, JavaScript, or other scripting languages and tools for data processing and automation.
• Experience with other ETL tools such as Azure Data Factory (ADF), SSIS, Informatica, or Talend is highly desirable.
• Familiarity with connecting to and extracting data from various ERP applications, including Oracle EBS, SAP ECC/S4, and Deltek Costpoint, among others.
• Experience with AI tools such as OpenAI, Palantir AIP, Snowflake Cortex, or similar.
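To illustrate the PySpark/Spark SQL pipeline skills listed above, a minimal, self-contained sketch follows; all paths, table names, and columns are hypothetical:

```python
# Illustrative PySpark pipeline: paths, tables, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Load a raw extract (e.g., landed from an ERP source) as a DataFrame.
raw = spark.read.parquet("/data/raw/invoices")

# Cleanse and consolidate: standardize the key, deduplicate, then aggregate.
curated = (
    raw
    .withColumn("invoice_id", F.upper(F.trim(F.col("invoice_id"))))
    .dropDuplicates(["invoice_id"])
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# A comparable aggregation expressed in Spark SQL (without the cleansing steps).
raw.createOrReplaceTempView("invoices")
curated_sql = spark.sql(
    "SELECT customer_id, SUM(amount) AS total_amount FROM invoices GROUP BY customer_id"
)

# Write the curated output for downstream BI/dashboard use.
curated.write.mode("overwrite").parquet("/data/curated/invoice_totals")
```

Similar DataFrame logic runs largely unchanged in Databricks or inside a Foundry transform, since both expose the standard Spark APIs.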
Benefits:
• Health and disability insurance
• 401(k) match
• Flexible spending accounts
• Employee Assistance Program (EAP)
• Education assistance
• Parental leave
• Paid time off
• Company-paid holidays