Lead Software Engineer
BADRI Management Consultancy
We are seeking a highly skilled and experienced Lead Software Engineer to join our dynamic team. The ideal candidate will have extensive experience developing data-centric SaaS solutions, or will have been a senior, active member of a team building such solutions. They must have hands-on experience creating or using rule engines that fire both deterministic and non-deterministic rules, and must have worked with large datasets (at least 20 million records), applying rules rather than merely filtering data.
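For illustration only, here is a minimal TypeScript sketch of that distinction; every type, field, rule name, and threshold below is hypothetical and not part of any existing system:

```typescript
// Minimal rule-engine sketch (illustrative only; all names are hypothetical).
// A deterministic rule always produces the same outcome for the same record;
// a non-deterministic rule may not (here, random audit sampling).

interface PolicyRecord {
  id: string;
  sumInsured: number;
  flags: string[];
}

interface Rule {
  name: string;
  deterministic: boolean;
  // A rule *applies* an action to the record rather than just filtering it out.
  fire: (record: PolicyRecord) => void;
}

const rules: Rule[] = [
  {
    name: "high-exposure-review",
    deterministic: true, // same input, same outcome, every run
    fire: (r) => {
      if (r.sumInsured > 1_000_000) r.flags.push("REVIEW_EXPOSURE");
    },
  },
  {
    name: "random-audit-sample",
    deterministic: false, // outcome varies from run to run
    fire: (r) => {
      if (Math.random() < 0.01) r.flags.push("MANUAL_AUDIT");
    },
  },
];

// Apply every rule to every record. At tens of millions of records this loop
// would be batched and parallelised rather than run in memory like this.
function applyRules(records: PolicyRecord[]): void {
  for (const record of records) {
    for (const rule of rules) rule.fire(record);
  }
}
```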
Additionally, the candidate should have experience with batch data uploads, including file-based uploads and data fetching from diverse sources, such as document databases and relational databases. Expertise in creating or using CI/CD pipelines to manage development and deployment processes is also essential.
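Again purely as a sketch of the batch-fetching side, assuming the official `mongodb` and `pg` Node.js drivers; the connection details, collection, and table names are hypothetical:

```typescript
// Sketch of batched reads from a document store and a relational store.
import { MongoClient } from "mongodb";
import { Pool } from "pg";

const BATCH_SIZE = 1000;

// Stream documents from MongoDB in driver-managed batches.
async function fetchFromMongo(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const cursor = client
      .db("claims")
      .collection("policies")
      .find({}, { batchSize: BATCH_SIZE });
    for await (const doc of cursor) {
      // hand each document to the rule/upload pipeline here
    }
  } finally {
    await client.close();
  }
}

// Page through PostgreSQL with keyset pagination so memory stays
// bounded no matter how large the table is.
async function fetchFromPostgres(pool: Pool): Promise<void> {
  let lastId = 0;
  for (;;) {
    const { rows } = await pool.query(
      "SELECT id, payload FROM policies WHERE id > $1 ORDER BY id LIMIT $2",
      [lastId, BATCH_SIZE]
    );
    if (rows.length === 0) break;
    // process the batch here
    lastId = rows[rows.length - 1].id;
  }
}
```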
Prior experience in the insurance industry will be a strong advantage.
Responsibilities:
- Design and implement scalable, microservices-based SaaS architectures to support the development and deployment of solutions that involve complex rule engines and large datasets.
- Coordinate with cross-functional teams to define project requirements, timelines, and deliverables, particularly in the context of managing batch data uploads and rule-based processing of large datasets.
- Manage the end-to-end development and deployment process, including continuous integration and delivery (CI/CD) pipelines, ensuring a smooth transition from concept to final release.
- Ensure the adoption of best practices in coding, testing, deployment, and handling of large datasets, while maintaining robust CI/CD processes.
- Utilize deep technical expertise to troubleshoot and resolve issues related to SaaS development, rule engine execution, and large-scale data processing.
- Lead or participate in requirements analysis specifically for applications involving deterministic and non-deterministic rule engines.
- Collaborate with internal teams to produce software designs and architectures that efficiently handle batch data processing and integrate with various data sources (document and relational databases).
- Write clean, scalable code using relevant programming languages, ensuring high performance when processing large datasets.
- Test and deploy applications and systems with a focus on maintaining continuous deployment pipelines and large-scale rule application.
- Revise, update, refactor, and debug code to enhance the efficiency of rule-based systems and data pipelines.
- Improve existing software with a focus on optimizing data processing, rule engine performance, and scalability.
- Develop comprehensive documentation throughout the SDLC, including documentation related to the backend, rule engine, data handling, and CI/CD practices.
- Serve as an expert on applications that involve large datasets and rule engines, and provide technical support as needed.
Technical Expertise:
- Proven experience delivering projects using the MEAN stack, PostgreSQL, MongoDB, monorepos, microservices, Docker, REST APIs, and HTML/CSS3.
- Experience developing SaaS applications on cloud platforms such as Azure and AWS, including integration with large-scale data sources and rule-based processing.
- Hands-on experience with CI/CD pipelines for SaaS applications, ensuring smooth deployment and continuous integration.
- Understanding of Agile methodologies.
- Ability to solve complex engineering problems, particularly in handling large datasets (50+ million records) and implementing rule engines for deterministic and non-deterministic rules.
Qualifications:
- Over 5 years of experience in the software development field.
- In-depth knowledge of and experience in the insurance industry.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Attention to detail.
- Excellent troubleshooting skills.
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Preferred Skills:
- Experience with agile development methodologies.
- Experience with batch data processing, including file-based uploads and data integration from document and relational databases.
- Experience with monorepo architecture for managing large-scale codebases.
- Strong understanding of PostgreSQL and other databases for large-scale data processing.
- Knowledge of additional programming languages and frameworks.
- Familiarity with DevOps practices and tools.
How to Apply:
To apply for this job, you need to sign in on our website. If you don't have an account yet, please register.