Big Data Developer (Spark / Scala / Python)
Company: S3
Location: Charlotte
Posted on: April 1, 2026
Job Description:

Big Data Developer (Spark / Scala / Python)
Location: Charlotte, NC (Hybrid)
Duration: April 2026 – April 2028 (Long-term Contract)
Industry: Financial Services / Enterprise Risk Technology

Overview
We are seeking an experienced Big Data Developer to support enterprise data and risk technology initiatives within a large-scale financial services environment. This role focuses on designing, developing, and optimizing distributed data processing solutions using modern big data technologies. The ideal candidate will have strong experience with Apache Spark, Scala, Python, and large-scale data pipelines within the Hadoop ecosystem. You will collaborate with engineering teams to build high-performance data processing systems that support enterprise analytics, regulatory reporting, and risk management initiatives.

Key Responsibilities

Big Data Development
- Design and develop scalable big data applications using Apache Spark, Scala, and Python.
- Build and optimize large-scale distributed data processing pipelines.
- Work with large datasets to support enterprise analytics, risk technology, and data platforms.
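The transformation work described above typically amounts to grouping and aggregating large datasets. As a minimal illustration (plain Python standing in for Spark DataFrame operations, with hypothetical column names and data not taken from this posting), a per-counterparty exposure rollup might look like:

```python
from collections import defaultdict

# Hypothetical records, standing in for rows of a large risk dataset.
trades = [
    {"counterparty": "ACME", "exposure": 1_200_000.0},
    {"counterparty": "GLOBEX", "exposure": 450_000.0},
    {"counterparty": "ACME", "exposure": 300_000.0},
]

def total_exposure_by_counterparty(rows):
    """Sum exposure per counterparty -- analogous in spirit to a
    Spark groupBy("counterparty").sum("exposure") over a DataFrame."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["counterparty"]] += row["exposure"]
    return dict(totals)

print(total_exposure_by_counterparty(trades))
# {'ACME': 1500000.0, 'GLOBEX': 450000.0}
```

In a production pipeline the same logic would run distributed across a Spark cluster rather than in a single-process loop.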
Data Engineering & ETL
- Develop and maintain ETL / ELT pipelines for data ingestion, transformation, and processing.
- Integrate data from multiple sources within Hadoop-based data ecosystems.
- Optimize data processing workflows for performance, reliability, and scalability.

Big Data Technologies
- Work within the Hadoop ecosystem, including Hive, HDFS, Kafka, and HBase.
- Support data platforms that process high-volume data workloads.
- Implement batch and streaming data processing solutions.

Automation & Scheduling
- Develop automated workflows and job scheduling using Autosys or similar scheduling tools.
- Improve operational efficiency by automating data processing and pipeline management tasks.
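For readers unfamiliar with Autosys, scheduled jobs are defined in JIL (Job Information Language). The fragment below is a hedged sketch only: the job name, machine, owner, and command are hypothetical, not from this posting.

```
/* Hypothetical Autosys JIL sketch: a weekday Spark ETL job.
   All names and paths below are illustrative. */
insert_job: risk_etl_daily
job_type: cmd
machine: hadoop-edge-01
owner: etl_svc
command: spark-submit --class com.example.RiskEtl /apps/etl/risk-etl.jar
date_conditions: 1
days_of_week: mo,tu,we,th,fr
start_times: "02:00"
alarm_if_fail: 1
```

A definition like this runs the batch pipeline every weekday at 02:00 and raises an alarm if the job fails.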
Development Best Practices
- Maintain source code using Git / GitHub version control.
- Follow Agile development practices and collaborate with cross-functional engineering teams.
- Participate in design discussions and contribute to data platform architecture decisions.

Data Governance & Compliance
- Support data governance, security, and regulatory compliance requirements in financial services environments.
- Ensure data integrity, accuracy, and secure handling of sensitive enterprise data.

Required Qualifications
- 5 years of software engineering or big data development experience
- Strong experience with Apache Spark, Scala, and Python
- Experience working with large-scale distributed data systems
- Experience developing ETL pipelines and data processing frameworks
- Experience with Git or other version control systems

Preferred Qualifications
- Experience working within the Hadoop ecosystem (Hive, HDFS, Kafka, HBase)
- Experience with Autosys or other enterprise job scheduling tools
- Experience in financial services or highly regulated environments
- Experience with cloud data platforms or modern data lake architectures

Key Skills
- Apache Spark
- Scala
- Python
- Hadoop Ecosystem (Hive, HDFS, Kafka, HBase)
- ETL / Data Pipelines
- Autosys Job Scheduling
- Git / Version Control
- Distributed Data Processing
- Data Governance & Compliance
- Enterprise Data Platforms
Keywords: S3, High Point, Big Data Developer (Spark / Scala / Python), IT / Software / Systems, Charlotte, North Carolina