• Location:
  • Salary:
    negotiable
  • Job type:
    Full-Time
  • Posted:
    3 months ago
  • Category:
    Industrial Relations, IT & Computer, Logistics and operations, Management & Manufacturing, Manufacturer, Manufacturing Practices Specialist, Monitoring Specialist, Operations, Renewable energy
  • Deadline:
    16/02/2021
  • Job Level:
  • Languages:
    English

Who we are, and what we do

PEG delivers Pay-As-You-Go (PAYG) asset-based financing to consumers who lack both access to reliable electricity and formal banking services. PEG’s anchor product – a basic solar home system that includes six lights, a phone charger, a radio, and a TV – allows consumers living on $5-10 per day to access clean light for working and studying after hours, avoid harmful air pollution from kerosene based lighting solutions, and also build credit for additional products and services over time.

To date, PEG has raised over $45 million and has over 600 full-time staff across Ghana, Ivory Coast, Senegal and Mali. PEG has also won numerous awards, including the prestigious 2017 Ashden International Award for excellence in sustainable energy, and was named one of the “fastest growing companies in Africa” by the London Stock Exchange.

About the role

We are looking for an experienced, self-driven ETL developer with sound database administration knowledge, specializing in building, optimizing and maintaining efficient data pipelines. As a member of our multi-cultural team, you will work on innovative, highly scalable and exciting projects. A passion for exploring product processes beyond the integration pipeline, and for applying existing knowledge and technical expertise to develop innovative solutions, is a must. The successful candidate will be part of our Data & Technology team and will work closely with critical business stakeholders, source teams and our third-party analytics teams.

Requirements

Candidate Attributes –

You are:

A fantastic communicator and team player who drives solutions to production.
Committed to high product standards with an eye on delivery time and data quality.
Creative enough to go beyond what the situation requires and act proactively.
An initiator of new approaches to solving problems.
Quick and effective in adapting to the dynamics of situations and adjusting to unexpected change.
Flexible enough to shift strategies, accept other viewpoints, overcome disappointments, and learn from setbacks to bounce back.
Able to analyse and synthesise experience, observations and information to evaluate options and identify patterns and future possibilities through analytical, conceptual and critical thinking.
Able to reconcile every detail and check results to ensure 100% accuracy.
Able to keep going in the face of setbacks and take up opportunities to show commitment to what has to be delivered.

Functional Competencies

Bachelor’s Degree in Computer Science, Statistics or an equivalent field.
3+ years’ experience in Data Warehouse design and ETL development, optimization and maintenance.
Solid background in business intelligence and data analysis methodologies.
Experience in dimensional modelling and two-tier architecture.
Well-versed in quality assurance principles and other engineering best practices (like industrial standards of naming conventions, detailed documentation of technical and non-technical aspects of the projects executed, etc.).
Experience in handling and optimizing data extraction from a variety of data sources.
Solid background in reporting and data analysis.
Strong ability to communicate technical aspects to a non-technical audience.
Must be adaptable to change and the fast pace of a dynamic company.
Fintech experience is an added advantage.

Technical Competencies

Deep knowledge of and working experience with Python and Pandas in the context of distributed systems.
Ability to work in an environment that uses Unix-based services.
Proficiency in SQL is essential (PostgreSQL would be ideal).
Experience with building data pipelines using programming languages (preferably Python)
Experience with at least one cloud service provider (preferably AWS).
Technical exposure to client-server architecture
Experience with Amazon Redshift is advantageous.
Well-versed in various ETL techniques, strategies and sourcing BI frameworks
Experience with various messaging systems, such as Kafka or RabbitMQ is an advantage
Experience in integrating APIs with ERP/CRM systems (such as Odoo).
Knowledge in predictive modelling is an advantage
Subject matter expert on data warehouse set up, structure and modelling
Knowledge of best practices/methodologies in Data Warehousing and Multi-dimensional data modelling (OLAP)
Knowledge of data mining techniques is an added advantage.
Knowledge and experience of state-of-the-art startup-style software development tools: GitHub, Slack, Stack Overflow, JIRA, Bitbucket, etc.
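To illustrate the kind of pipeline work this role involves, here is a minimal extract–transform–load sketch in Python with pandas. The source data, column names and SQLite target are hypothetical stand-ins (a real pipeline would pull from source systems and load into a warehouse such as PostgreSQL or Redshift):

```python
import io
import sqlite3

import pandas as pd

# Extract: read raw payment records (an in-memory CSV standing in for
# a real source system; the columns are illustrative only).
raw_csv = io.StringIO(
    "customer_id,amount,paid_at\n"
    "C001,5.50,2021-01-04\n"
    "C002,10.00,2021-01-05\n"
    "C001,5.50,2021-01-11\n"
)
payments = pd.read_csv(raw_csv, parse_dates=["paid_at"])

# Transform: aggregate payments per customer into a simple fact table.
fact = payments.groupby("customer_id", as_index=False).agg(
    total_paid=("amount", "sum"),
    n_payments=("amount", "count"),
)

# Load: write the result into a target table (SQLite standing in for
# the real warehouse).
conn = sqlite3.connect(":memory:")
fact.to_sql("fact_payments", conn, index=False, if_exists="replace")

loaded = pd.read_sql("SELECT * FROM fact_payments ORDER BY customer_id", conn)
print(loaded)
```

The same extract/transform/load shape scales up when the in-memory pieces are replaced by real source connectors, incremental loads and an orchestrator.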

Responsibilities

Design, implement, or operate comprehensive data warehouse systems to balance optimization of data access with batch loading and resource utilization factors, according to customer requirements.
Design, implement and maintain ETL pipelines.
Work closely with our Data Engineers and Analysts to develop the ETL pipelines.
Perform root cause analysis, drill down to the key issues, and demonstrate multiple explanations and solutions.
Understand, map, integrate, and document complex data relationships and business rules.
Apply data quality, cleaning and masking techniques using Python.
Periodically verify the structure, accuracy, and quality of warehouse data.
Troubleshoot any issues relating to the ETL jobs and maintain a knowledge base capturing the solutions provided.
Map data between source systems, data warehouse, and data marts.
Develop and maintain standards, such as organization nomenclature, for designing data warehouse elements, such as data modelling objects, schemas and databases.
Prepare functional and technical documentation for the data warehouse.
Create supporting documentation, such as metadata and diagrams of entity relationships, business processes, and process flow.
Create plans, test files, and scripts for data warehouse testing, ranging from unit to integration testing.
Collaborate with business stakeholders to understand the requirements and implement them.
Manage software configuration (preferably using GitHub).
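The verification duties above — checking the structure, accuracy and quality of warehouse data — can be sketched as a small set of rule-based checks in Python. The table, column names and rules here are hypothetical, not PEG's actual schema or quality policy:

```python
import pandas as pd

# Hypothetical warehouse extract with two deliberate defects:
# a missing customer_id and a negative payment amount.
df = pd.DataFrame(
    {
        "customer_id": ["C001", "C002", None, "C004"],
        "amount": [5.5, -1.0, 10.0, 7.5],
    }
)

def quality_report(frame: pd.DataFrame) -> dict:
    """Return simple structure/accuracy checks for a payments table."""
    return {
        "missing_customer_id": int(frame["customer_id"].isna().sum()),
        "negative_amounts": int((frame["amount"] < 0).sum()),
        "duplicate_rows": int(frame.duplicated().sum()),
    }

report = quality_report(df)
print(report)  # {'missing_customer_id': 1, 'negative_amounts': 1, 'duplicate_rows': 0}
```

In practice such checks would run on a schedule against the warehouse, with failures logged to the knowledge base rather than printed.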

Additional Responsibilities:

Think creatively: PEG is a place that is permanently pushing the boundaries of its industry. We are not looking for someone who will just estimate models, write analyses and code. We want someone who will extract insights and provide business recommendations.
Generate impact: We want someone obsessed with impact. We don’t want a person who is just interested in finding patterns in the data. We want someone who understands how to turn those discoveries into value for our customers and our internal teams every day.

PEG is an equal opportunity employer committed to diversity. All qualified candidates regardless of age, sex, ethnicity, race and religion are encouraged to apply.