Hybrid Lead Engineer - Big Data at JCPenney
JCPenney · Dallas, United States of America · Hybrid
- Senior
- Office in Dallas
Lead Engineer - Big Data
The Lead Engineer - Analytics (Advanced Software Engineer) will report to a Senior Manager - Analytics. Using Big Data technology, you will be responsible for designing, developing, implementing and supporting data analytics solutions utilizing cloud architecture.
Primary Responsibilities:
• Lead, build and deliver high-impact projects of cross-portfolio scope and complexity, collaborating with cross-functional teams across both business and technology
• Identify technical implementation options and issues
• Play a key role in defining the company's future technology architecture and in developing roadmaps for its systems, platforms and processes across disciplines
• Provide technical leadership and coaching to other engineers and architects across the organization
• Act as an expert and point of contact for a cross-scrum business/technology/domain area
• Partner and communicate cross-functionally across the enterprise
• Provide the highest level of expertise for the specification, development, implementation and support of analytics solutions (e.g. Datameer, Hive, Python, Redshift, Tableau)
• Explain technical solutions and issues to senior leaders in the company in non-technical, understandable terms
• Interact with business teams to gather, interpret and understand their business needs and create design specifications
• Foster the continuous evolution of best practices within the development team to ensure data quality, standardization and consistency
• Continuously seek improvement opportunities for the Data Lake platform and data ingestion that ensure optimal use of the environment and improve stability, and lead efforts to implement the solutions
Core Competencies & Accomplishments:
• 10+ years of professional experience
• 4+ years of experience with Big Data technology and analytics
• JCPenney business domain knowledge preferred
• Proficient in query languages such as SQL, Hive
• Proficient in using Big Data tools (e.g., Hadoop, NoSQL DB, HBase) and in API development and consumption
• Proficient in data preparation and manipulation using the Datameer tool
• Experience with one or more languages (e.g., Python, Java)
• Experience with SOA, IaaS, and Cloud Computing technologies, particularly in the AWS environment
• Experience with Hadoop/Hive, Sqoop and Spark
• Experience with ETL and ELT data modeling
• Experience with data visualization tools like Tableau
• Experience with continuous software integration, test and deployment
• Experience with agile software development paradigm (e.g., Scrum, Kanban)
• Ability to work within a dynamic environment with evolving requirements and capability goals
• Self-motivated and capable of working with little or no supervision
• Maintains up-to-date knowledge in the appropriate technical areas
• Strong written and verbal communication; able to present to and hold meetings with key business and technology partners, enterprise architects and vendors