Data Engineering Architect
Wilmington, DE 19806
JPMorgan Chase is undertaking an aggressive digital transformation agenda within the Consumer and Community Bank (CCB), which serves over 50 million customers and builds on the success of our market-leading mobile and online service offerings. JPMC is investing in innovative ways to deepen customer engagement and create the most compelling digital experience in the financial services industry. We are looking for talent that will help us position JPMC as the undisputed leader in digital financial services and payments, enabling JPMC to deliver highly personalized, real-time experiences that wow our customers.
As CCB continues to transform, including advancing high-velocity software engineering, agile practices and a hybrid cloud strategy, solving the data engineering and data management aspects of cloud platform data services will be paramount. To meet this strategic need, a role was created for a forward-looking data engineer who is passionate about data management lifecycle practices and can ensure discipline and rigor are built into the data services fabric that persists and moves data in a hybrid cloud environment. This lead will bring teams together to accelerate understanding, identify required features and capabilities, and ensure sustainable delivery.
In this role, the Data Engineering Architect will be responsible for strategic leadership and execution that ensures data is controlled and compliant, at scale, in a hybrid cloud environment.
Key responsibilities include:
+ Identify and solve hybrid cloud data engineering and sustainable data governance challenges as workloads and data move to hybrid cloud environments
+ The engineering must place rigor on data management practices, ensuring transparency of data persisted and moved (lineage) while increasing the capture and use of metadata to enable compliant, high-quality registration, ingestion, enrichment, provisioning/information delivery, data access, consumption, data migration and data lifecycle management
+ The data catalog and model registry must be engineered into the data management fabric
+ Identify and elevate capability/feature gaps and SRE telemetry so the backlog is clear and sprints can close the gaps
+ Establish and drive a cohesive metadata strategy that delivers a metadata-driven, metadata-intelligent information architecture (coverage, propagation and use)
+ Be familiar with public and private cloud provider platforms and the underpinning data services and data governance, to ensure an understood and integrated end-to-end view. Identify opportunities to take advantage of cloud provider and open source technology advancements, and bring those positions forward for decision
+ Bring teams together across the firm and in the industry to solve data management in a hybrid cloud environment for control at scale, bringing modern data engineering practices to bear
+ Prepare for challenges related to microservices-based architecture and private data (sourcing and reconciliation) in a hybrid cloud
+ Collaborate on driving maturity and discipline around containers for data persistence, as well as inventory and resource tag and annotation strategies
+ Over 12 years of experience in data engineering and data management. Key areas include:
+ Experience in distributed and NoSQL database technologies such as Hadoop and Cassandra, along with data replication strategies, resiliency designs and distributed tracing
+ Experience in metadata and metadata-driven applications
+ Expertise in the design of data pipelines and big data core data services
+ Expertise in cloud-based platforms (AWS), services and technologies related to big data, data management and templates (Terraform): S3, Kubernetes, Cloud Foundry, Glue, RDS, Redshift
+ Experience with Spark, Java/Spring (service implementations), Kafka, Spark Streaming and MQ (legacy)
+ Solid understanding of software delivery (CI/CD), artifacts, tooling automation and SRE
+ Solid understanding of Domain-Driven Design concepts applied to data engineering
+ Experience with API gateways (IBM DataPower, Apigee)
+ Experience with microservice and event-driven architectures
+ Familiarity with Lambda and Kappa architectures for data-intensive applications
+ Passion for hands-on work in the data engineering and data management space, with a focus on bringing the two together for control at scale while improving data provisioning and consumption velocity
+ Excellent organizational, communication, influence and execution skills
JPMorgan Chase is an equal opportunity and affirmative action employer, Disability/Veteran.