Data Engineer (For OPT/CPT Candidates) - Immediate Hiring
Job Description: Data Engineer (For OPT/CPT Candidates)
Candidates must be able to work on our W2.
No H-1B candidates.
Duration: Long-term
Position Overview: We are seeking a highly motivated Data Engineer to join our dynamic team. This position is specifically open to candidates on OPT (Optional Practical Training) or CPT (Curricular Practical Training). As a Data Engineer, you will play a key role in developing, optimizing, and maintaining data pipelines that drive the flow of data within the organization. You will work with large datasets and collaborate with data scientists, analysts, and other stakeholders to ensure that high-quality data is available for analysis and business decision-making.
Key Responsibilities:
- Data Pipeline Development:
  - Design, develop, and maintain scalable ETL (Extract, Transform, Load) pipelines to process and transform data from various sources into usable formats.
  - Automate data workflows to enable seamless data flow and enhance efficiency.
- Database Management:
  - Manage databases (SQL/NoSQL) to ensure data availability, integrity, and security.
  - Perform regular database optimization tasks for performance improvements.
- Data Integration:
  - Integrate data from internal and external sources such as APIs, data lakes, and third-party applications.
  - Implement data pipelines that provide clean, accessible, and real-time data for analytical purposes.
- Data Quality Assurance:
  - Ensure the quality and consistency of data by implementing validation checks and automated testing.
  - Identify and troubleshoot data-related issues and propose solutions for improvement.
- Collaboration:
  - Collaborate with data scientists, analysts, and cross-functional teams to understand data requirements and deliver solutions that meet business needs.
  - Support other teams by making data available and helping optimize analytical workflows.
- Cloud Services & Tools:
  - Work with cloud platforms such as AWS, Google Cloud, or Azure to store and process data efficiently.
  - Use data services such as S3, Redshift, BigQuery, and related tools for scalable data storage and processing.
- Optimization:
  - Monitor and optimize data processing pipelines for efficiency, ensuring timely processing of large datasets.
  - Implement caching, indexing, and other performance-enhancing techniques to minimize processing times.
- Documentation:
  - Document data flows, pipeline architecture, and processes to ensure clarity and consistency across teams.
Required Skills and Qualifications:
- Must be on OPT (Optional Practical Training) or CPT (Curricular Practical Training) status and authorized to work in the U.S.
- Bachelor's degree in Computer Science, Data Engineering, Information Technology, Mathematics, or a related field.
- Knowledge of SQL and experience with relational and NoSQL databases (e.g., MySQL, MongoDB).
- Familiarity with Python or Java for scripting and automation tasks.
- Experience with data integration tools and ETL processes.
- Understanding of cloud platforms and services such as AWS, Google Cloud, or Azure.
- Good understanding of data structures and algorithms.
- Ability to work with large datasets and solve complex data processing challenges.
- Strong problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Skills:
- Experience with big data technologies such as Hadoop, Apache Spark, or Kafka.
- Familiarity with containerization tools like Docker.
- Exposure to data warehousing and reporting tools (e.g., Tableau, Power BI).
- Understanding of data modeling and schema design.
- Familiarity with version control systems like Git.
Benefits:
- Competitive salary and performance-based incentives.
- Health, dental, and vision insurance.
- Flexible working hours and remote work options.
- Professional development and learning opportunities.
- Mentorship from senior engineers and data professionals.
Please send your updated CV to [email protected].