Associate, Applications Support
Wilmington, DE 19803
Duties: Design, analyze, develop, review, test, deliver, and support Big Data software and new BI/analytics products to enable rapid prototyping and proof-of-concept delivery on firm-wide programs. Work on development, coding, testing, and application programming to solve complex, mission-critical problems, both internally and externally. Embrace leading-edge technologies and methodologies to support the business, working closely with the Database, Big Data Engineering, Cloud Architecture, Risk, and Development teams. Demonstrate a strong understanding of the Hadoop platform (Cloudera or Hortonworks). Integrate new tools with JPMC internal infrastructure: Keon, Active Directory, EPV, Splunk, and Netcool. Utilize Hadoop Hive, HBase, Spark, and YARN. Administer Big Data tools and technologies including Hadoop Ambari/CDH, Spark, Hive, Impala, Trifacta, and AtScale. Work with business intelligence and reporting tools such as Tableau and/or Cognos. Work with operating systems (Unix/Linux, Windows), complex database and data warehouse technologies, NoSQL data stores such as HDFS and Cassandra, scripting and automation tools, and container platforms (Docker/Kubernetes). Work with cloud development stacks such as AWS, Azure, or GCP. Design and implement horizontally scalable, highly available systems with a focus on performance and resiliency. Profile, debug, and performance-tune complex distributed systems.
Minimum education required: Bachelor’s degree or equivalent in Computer Science or a related field.
Minimum experience required: 5 years of experience in Big Data engineering, cloud architecture, and BI analytics, or related experience.
Skills required: Experience planning and designing application architecture for various Big Data tools. Experience implementing Big Data consumption applications using BI analytics tools. Demonstrated knowledge of the Linux, Unix, and Windows operating systems. Experience with Hadoop platform administration in a Hortonworks or Cloudera cluster environment. Demonstrated knowledge of at least three (3) of the following tools: HDFS, Hive, Spark, YARN, Impala, Sentry, Ranger, or ZooKeeper. Demonstrated knowledge of at least two (2) of the following programming languages: SQL, Python, Perl, bash, or ksh. Experience integrating at least two (2) of the following infrastructure platform services: SSO, Kerberos, SSL, LDAP, Windows AD, Keon, or EPV. Experience incorporating at least two (2) of the following monitoring and scheduling tools: Splunk, Dynatrace, Netcool, Tivoli, or Control-M. Experience working with cloud development stacks such as Pivotal Cloud Foundry, Kubernetes, AWS, or GCP for hybrid configurations. Employer will accept any amount of professional experience with the required skills.
To apply for this position: Write job code CC-PS-AAS-JP on your resume and email it to firstname.lastname@example.org. Your resume must reference job code CC-PS-AAS-JP. JPMorgan Chase is an Equal Opportunity and Affirmative Action Employer, M/F/D/V.