
Data Engineer 3

Job Title: Data Engineer 3
Duration: 12+ Months W2 Contract
Location: San Jose, CA Area preferred, or Austin, TX, or Remote

The Challenge:
The Adobe DMe B2B analytics team is looking for a data engineer to build the foundational data sets and analytical reports for B2B customer journey analytics. These data sets and reports will provide quantitative and qualitative analysis of acquisition, engagement, and retention. In addition, the candidate will contribute to cross-functional projects that help the business achieve its potential in revenue, customer success, and operational excellence. For this role, we are looking for an ambitious and driven individual: an analytical thinker with outstanding communication skills, high initiative, and strong leadership potential.

What you'll do:
  • Design and develop new systems and tools that enable teams to consume and understand data faster, along with maintaining and enhancing existing ones
  • Analyze business needs, profile large data sets, and design, develop, and tune data products on big data platforms (Hadoop, Databricks) with an emphasis on data quality and performance
  • Design, build, and launch efficient and reliable data pipelines to move data into and out of the Data Warehouse
  • Build foundational data sets and reporting solutions and make them available for consumption through dashboards and ad hoc reports
  • Develop and extend data models, processes, standards, frameworks, and reusable components for various data engineering functions/areas
  • Transform and migrate on-premises data applications to a cloud platform
  • Collaborate with key partners, including product marketing teams, sales and support teams, engineering leads, architects, BSAs, and program managers
What you'll need:
  • BA/BS in Computer Science, Engineering, Mathematics, or another technical field; a master's degree is preferred
  • 7+ years of proven experience
  • Experience with Databricks, Hadoop, Hive, HDFS, and Big Data technologies
  • Experience with Databricks data science workspace, Presto, Tidal and Oozie workflows
  • Highly proficient in writing Apache Spark and Python queries
  • Good knowledge of writing UDFs in Hive and Spark
  • Advanced working knowledge of SQL and experience with cloud columnar databases, ACID transactions, Spark MLlib, and machine learning for tabular data
  • Excellent knowledge of ETL tools such as Sqoop, SnapLogic, and Alteryx
  • Familiarity with cloud environments (AWS, Azure)
  • Sound knowledge of UNIX and shell scripting
  • Proficient in Excel, Power BI, and Tableau
  • Familiarity with processes supporting data transformation, data structures, metadata, dependency and workload management
  • Knowledge of engineering best practices and tools such as Jira, version control, CI/CD, code review, and pair programming
  • Ability to quickly learn about Adobe business processes, products, data platforms and various tools
  • Ability to synthesize information across a wide variety of data sources, including Salesforce, Adobe Analytics, campaign systems, Adobe internal business systems, SAP, Microsoft Dynamics, etc.
  • Able to communicate effectively with business stakeholders, engineers, and users

©NextDeavor 2022