Engineering Archives - Tala: Giving credit where it’s due

The AI Paradigm Shift Changing Software Engineering
https://tala.co/blog/2025/08/14/the-ai-paradigm-shift-changing-software-engineering/
Thu, 14 Aug 2025 17:19:38 +0000

"Regardless of where they are in their journey, any company utilizing AI has realized that software engineers are critical in the process."


By Shon Shampain, Director of Mobile Engineering

It’s fair to say that the introduction of the tractor as a farming tool in 1892 absolutely transformed the industry, with its output increasing exponentially. In a virtual instant, manual plowing was relegated to history.

Something very similar is happening in software development with the advent of AI: the days of manual coding are waning. Engineers are now being asked to work in a different manner, one that supports a higher output.

While there’s a lot being said about what’s happening in software engineering with respect to AI, it’s important to note that there’s a lot of misinformation circulating.

The Current State of AI Development

On the one hand, there are companies operating software without any AI involvement. In my opinion, they can’t be expected to survive long term unless they serve incredibly niche markets, much like boutique farming today.

Then, there are companies doing the lion’s share of their development with AI. These early adopters have figured out the procedures necessary to support their development chain.

Left in the middle are the vast majority of companies that are still figuring things out, and introducing AI as best they can. Regardless of where they are in their journey, any company utilizing AI has realized that software engineers are critical in the process.

For the foreseeable future, there will always be a developer calling the shots. Developers are the driving force behind the technology, so it’s critical to understand that using AI effectively means creating leverage: enabling a single developer to produce multiples of their previous output. This kind of leverage doesn’t really have a cap at the moment, and it’s why you hear that some companies are no longer hiring junior developers—because the seniors have found the leverage.

The senior staff that remain will have evolved to a new way of operation. As philosopher Ken Wilber puts it, you “transcend and include.” You transcend to a higher level, but this higher level includes the core of your being that got you here in the first place. It has to be this way. Without core engineering skills, there is nothing to evolve around.

A New Chapter in Software Engineering

In the new paradigm, most of the work falls into one of three main aspects of AI usage: selecting the correct engine, managing the context, and working the prompt. Gone are the long hours of carefully crafting meticulous code. Sure, some small amount of coding will still be required, but it is less and less every week because the work is one step removed.

Previously, code was generated directly. Now, code is generated from prompts that are generated directly. In a very real sense it’s similar to going from a coding role to an architecture role.

We learned quickly at Tala that sharing information regularly provides the best return on investment with regard to getting everyone up to speed. Engineers tend to like to solve problems by themselves and present work when it is known to be correct, but what’s interesting about AI is that the real speed comes when it’s collaborative.

To advance our knowledge on prompt engineering, we first had to set up the right infrastructure. We created a repository for prompts that are indexed to feature stories, a documentation page detailing meta prompting techniques, and a rich company culture that celebrates sharing both our victories in our AI journey, but more importantly the struggles.
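A repository of prompts indexed to feature stories can be kept quite lightweight. Here is a minimal sketch of what such an index might look like; the class and field names are hypothetical illustrations, not our actual schema:

```python
# Minimal sketch of a prompt repository indexed to feature stories.
# Record fields and store shape are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PromptRecord:
    story_id: str   # the feature-story key the prompt belongs to
    title: str
    prompt: str     # the prompt text that produced a good result
    notes: str = ""  # struggles, dead ends, meta-prompting tips


class PromptRepo:
    def __init__(self) -> None:
        self._by_story: dict[str, list[PromptRecord]] = {}

    def add(self, record: PromptRecord) -> None:
        # Index each record under its feature story for later reuse.
        self._by_story.setdefault(record.story_id, []).append(record)

    def for_story(self, story_id: str) -> list[PromptRecord]:
        return self._by_story.get(story_id, [])
```

The point is less the data structure than the habit: every prompt that worked gets recorded against the story it solved, so the next engineer starts from a known-good prompt instead of from scratch.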

In our sessions where we share our experiences, it’s inevitable that two or more people come across the same struggle, find the same answer, or decide to pursue different angles. Sometimes we’re able to find improvements in our process during unexpected moments that tend to cluster together.

A great example of this came from taking Figma designs directly to Kotlin code. Our group was hand coding UI up to a particular point in time, before some team members discovered tools that could automate the process while maintaining our coding standards and design guidelines. Almost overnight, we cut the time it takes to complete a high quality UI story, once a major task, by 50-75%.

The Quest for the Whole Codebase & the Mental Hurdle of Prompt Management

Working in a corporate environment generally involves a static set of AI engines to choose from. This means that selecting the proper engine is usually a one time effort, or at least a task that doesn’t happen very often. The real challenges for the AI engineer have to do with managing the context and working the prompt.

Context management is an aspect of the game that has to be continually monitored. A public facing LLM has an incentive to limit the context as much as possible, both to reduce the computation needed for any query and to mitigate data privacy issues. This is in direct conflict with most users, who want AI to have as much context as possible.

The holy grail is getting the whole codebase into the context, as well as all relevant business documents from Slack conversations to design documents and feature requirements. Needless to say, most companies are still quite far off.

Our efforts to get our whole codebase into the context have been limited by the significant discrepancy between current context-window limits (typically a few thousand tokens) and the size of our codebase (often hundreds of thousands of lines of code).

Some vendors advertise complete project awareness, but the numbers suggest they must be cutting corners by swapping things in and out of context space. While this might work in some instances, we’ve seen subpar results in others.

In response, we’ve pursued a more programmatic approach where we control the loading of the context as we navigate the codebase. It’s been a game changer for tasks like refactoring, sweeping for specific coding patterns, or upgrading coding standards across all files, but loading files and information into the context is still a time consuming, manual process.
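The programmatic approach above boils down to deciding which files enter the context ourselves, instead of letting a tool swap them in and out. A rough sketch of the idea, with a hypothetical token heuristic and helper names (not our actual tooling):

```python
# Sketch of programmatic context loading: concatenate chosen files
# into a prompt context until a token budget is exhausted.
# The 4-chars-per-token heuristic is an illustrative assumption.
from pathlib import Path


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for typical source code.
    return len(text) // 4


def load_context(files: list[Path], token_budget: int) -> str:
    """Concatenate file contents, in order, under the token budget."""
    parts: list[str] = []
    used = 0
    for path in files:
        text = path.read_text()
        cost = estimate_tokens(text)
        if used + cost > token_budget:
            break  # stop before overflowing the context window
        parts.append(f"// FILE: {path.name}\n{text}")
        used += cost
    return "\n".join(parts)
```

Because the caller controls the file list and its order, tasks like refactoring sweeps can prioritize exactly the files that matter, at the cost of the manual curation described above.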

Similarly, prompts demand ongoing work from developers. The end goal is always to stay with a prompt (repeating the question, modified, over and over) until you get the exact output you want, then share it with your team so it can be re-used. This is one of the biggest challenges companies face.

Fortunately, it is more of a mental hurdle than a technological limitation. To overcome it, developers need to be encouraged to fully make the change away from analog coding and jump aboard AI development. Every time a developer stops working with a prompt and fills in the details of some code by hand, the process stagnates.

The Goals Ahead

AI is also helping the Mobile group in other ways that are not specifically related to the direct coding experience. For example, we are hooking up actions on GitHub to have AI perform code reviews after our developers share their thoughts. This is both to double check that we haven’t missed anything, and to ensure that our coding standards and architectural patterns are adhered to.

Our ultimate goal in the Mobile group is to set up an expert system where we can link documentation, Slack threads, feature stories and the codebase all together. Due to the limitations on loading context with current AI engines, this likely involves us creating a custom indexing mechanism among the data that we consider relevant to the question at hand, and then interfacing with AI properly through an API call. It’s ambitious, but being able to link all aspects of business data together would be a game changing development.
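To make the expert-system idea concrete, here is a toy sketch of a custom index over heterogeneous sources (docs, Slack threads, feature stories, code). It is purely illustrative: a real system would likely use embeddings and chunking rather than keyword overlap, and the final API call to the model is omitted.

```python
# Toy inverted index linking heterogeneous business data, used to pull
# only relevant items into a prompt before calling a model's API.
from collections import defaultdict


class ExpertIndex:
    def __init__(self) -> None:
        self._index: dict[str, set[int]] = defaultdict(set)
        self._docs: list[tuple[str, str]] = []  # (source, text)

    def add(self, source: str, text: str) -> None:
        doc_id = len(self._docs)
        self._docs.append((source, text))
        for word in set(text.lower().split()):
            self._index[word].add(doc_id)

    def query(self, question: str) -> list[tuple[str, str]]:
        # Rank documents by how many query words they share.
        scores: dict[int, int] = defaultdict(int)
        for word in set(question.lower().split()):
            for doc_id in self._index.get(word, ()):
                scores[doc_id] += 1
        ranked = sorted(scores, key=scores.get, reverse=True)
        return [self._docs[i] for i in ranked]
```

The retrieved snippets, not the whole corpus, would then be loaded into the context for the API call, sidestepping the context-size limits discussed earlier.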

ML engineering excellence fueling Tala’s innovation
https://tala.co/blog/2024/08/07/ml-engineering/
Wed, 07 Aug 2024 15:50:00 +0000

Tala’s novel machine-learning approaches continuously unlock new, better ways to deliver credit.


By: Anna Muszkiewicz, Senior Data Scientist

At Tala, we use novel machine-learning approaches to provide access to credit for the Global Majority. We serve our customers best by continuously unlocking new, better ways to deliver credit.

The data science team is a driving force behind Tala’s rapid innovation. To ensure we can innovate without being slowed down by maintenance of existing pipelines, we have seamlessly incorporated software development best practices into our data science workflow. As a result, our data scientists create and maintain production-ready ML pipelines with minimal overhead. With a single command, we set up automated testing and deployments via CI/CD workflows. This safeguards our pipelines from degradation, enables routine model re-training with new data to improve model performance, and allows us to focus on novel research and innovation.

Here’s how we arrived at this data-science-friendly solution for integrating software development best practices into our data science workflow.

Seamless CI/CD workflows for data scientists

We use continuous integration and continuous delivery/deployment (CI/CD) because they help us ensure a seamless customer experience and a robust customer journey.

To promote reusability and shorten development cycles, we share code where possible. This leads to an old software development problem: how do we safeguard against software regressions, or situations where previously-working code stops working? A standard way of preventing regressions is to use automated tests, so that’s what we set out to do.

Our ML training pipelines are developed by data scientists; therefore, tests should complement our development process. Our requirements are:

  • The testing framework should meet software development rigor while being data-science-friendly, allowing data scientists to use CI/CD workflows without having to debug them or write them from scratch.
  • During development, we want to have flexibility to run the tests locally and on a Kubernetes cluster. This is to ensure that the pipelines can run successfully in production.
  • By design, our training pipelines follow a “configuration as code” approach to support scalability across markets. In a similar fashion, we expect to easily toggle between markets when running tests.

Our setup

We build our batch ML training and inference pipelines using the following tools:

  • A proprietary feature store solution to ensure consistent feature definitions across pipelines, as well as between model training and serving
  • Industry-standard ML and AI libraries
  • GitHub Actions CI/CD workflows to automatically run tests on relevant pull requests
  • A custom Python tool that, from a single command, generates CI/CD workflow files: GitHub Actions workflows, requirements files, and Dockerfiles
  • Custom docker images to ensure the environment is the same during model development and full-scale model training
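The workflow-generator tool mentioned in the list above can be imagined as a small templating script. This sketch is purely illustrative; the template contents, file paths, and function names are assumptions, not Tala’s internal tool:

```python
# Hypothetical one-command generator that writes a GitHub Actions
# workflow per pipeline, with a test matrix over markets.
from pathlib import Path
from string import Template

WORKFLOW_TEMPLATE = Template("""\
name: test-$pipeline
on:
  pull_request:
    paths: ["pipelines/$pipeline/**"]
jobs:
  test:
    strategy:
      matrix:
        market: [$markets]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest pipelines/$pipeline --market $${{ matrix.market }}
""")


def generate_workflow(pipeline: str, markets: list[str], out_dir: Path) -> Path:
    # Render the template and write one workflow file per pipeline.
    out = out_dir / f"test-{pipeline}.yml"
    out.write_text(WORKFLOW_TEMPLATE.substitute(
        pipeline=pipeline, markets=", ".join(markets)))
    return out
```

The value of generating these files, rather than writing them by hand, is that every pipeline gets an identical, known-good workflow, and data scientists never have to touch CI internals.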

Our solution

CI/CD jobs flag and prevent incoming regressions 

The goal of tests is to safeguard against software regressions. For every model in production or close to deployment, tests are automatically triggered on relevant pull requests as part of the CI/CD workflow for each pipeline. The tests run in parallel across multiple pipelines, and for any single pipeline, they also run in parallel across markets. We have the flexibility to choose whether to run the tests locally (on a GitHub runner) or on a Kubernetes cluster.

Image 1. CI/CD workflows support parallel tests over multiple batch training pipelines simultaneously. Tests are triggered on particular events (such as relevant pull requests).
Image 2.1. For every pipeline, tests run in parallel across markets. For a pipeline under active development, we optionally build a custom Docker image. Docker images are later retrieved for testing and full-scale training.
Image 2.2. For every pipeline, tests run in parallel across markets. Here, we show the testing process for a pipeline whose corresponding model is already in production. One difference compared to pipelines under active development (as shown in Image 2.1 above) is the lack of the Docker image-building step.
Image 3. For each market, we have an option to run tests locally or on a Kubernetes cluster. Here we zoom in on a test for one market. 

We use custom Docker images to ensure the environment is the same during both model development and full-scale model training. To test the pipeline for a corresponding model that is already in production, the pre-built Docker image is retrieved for testing purposes during the CI/CD workflow. For a pipeline under active development, the Docker image is built as part of the CI/CD job itself. Once built, the image is then retrieved and used for both testing and full-scale training in the future.

What we do (and don’t) test

At Tala, our libraries and proprietary feature store are already extensively tested, so we don’t test them here. When writing tests for our batch training/inference pipelines, we prioritize coverage and reproducibility. The tests that data scientists write during pipeline development include:  

  1. Integration tests to ensure that pipelines run successfully in 10 minutes or less, using a minimal dataset, a low number of iterations, and a small set of hyperparameters.
  2. Tests safeguarding against software regressions. For this purpose, a small and fixed dataset can be used to train a model predicting close to random. The test asserts that model predictions match expected values.
  3. Anything that a data scientist deems important. For instance, while refactoring an older pipeline, a test asserting that the new pipeline generates the same predictions is useful.

How this enhances the work of data scientists

From a data scientist’s perspective, writing tests and having them triggered on relevant pull requests ensures existing pipelines don’t break, and model re-training is truly a push-button exercise. Reducing toil frees up time for innovation, and ensures our customers continue to get the products and services they need to manage their financial lives.

Integrating software engineering best practices into a data scientist’s workflow has other advantages. Contrary to popular belief, testing doesn’t prolong development time, but rather accelerates it by ensuring a set of passing tests regardless of code or dependency changes.

To facilitate adoption of best practices on our team, we have implemented regular pairing sessions and provided guidance and resources, including documentation and video recordings. Data scientists at Tala have also adopted a “deliver then iterate” approach, writing the end-to-end pipeline with rudimentary data ingestion, model training and evaluation elements, and only then refining the individual components. This iterative approach permits us to uncover and address any would-be blockers early in the model development life cycle.

Equally important, data scientists are not expected to become engineering experts. This is why we have automated the generation of GitHub Actions workflows, requirements files, and Dockerfiles. The intention is that data scientists rarely have to look under the hood to debug these. As a result, sporadic support from a single machine learning engineer is sufficient to keep the process going.

Data scientists leverage ML engineering excellence to drive innovation

By integrating software development best practices into the data science workflow, data scientists at Tala can maximize time spent on innovation and rapidly bring proven, novel ideas to market. Data scientists build and maintain production-ready ML pipelines, write automated tests, and create CI/CD workflows that trigger on relevant pull requests. This safeguards our codebase from software regressions. As a result, ML batch training and inference pipelines are tested and testable. This allows data scientists to focus on innovation, and ultimately enables Tala to relentlessly focus on serving our customers better and better, every day.

There’s no “I” in team: Reframing the Agile process
https://tala.co/blog/2023/07/05/theres-no-i-in-team-reframing-the-agile-process/
Wed, 05 Jul 2023 13:00:00 +0000

Reframe the Agile process to enhance team collaboration.

This series takes a deeper look at how our engineers build digital financial services. Learn more about how they address large-scale technical challenges at Tala.


By: M. Silva, Lead Android Engineer 

In software engineering, technical concerns often overshadow an equally important aspect of our work: collaboration, specifically the way we work together within our teams. It is something we take for granted.

While many will tell you they practice Agile Software Development or Scrum, the true value proposition has been lost on many individuals. Let’s pause and think critically about some of the processes that can be taken for granted in our day-to-day work and provide perspective on how we can utilize them more effectively. 

The Manifesto for Agile Software Development

I’m fairly certain most software developers have heard of the Agile Manifesto at some point. It has had a discernible impact on the way we conduct our work. I can also assume we’ve all participated in Scrum at some point, right? That’s how pervasive it is. We can probably name all of the ceremonies (stand-up, refinement, planning, demo, etc.), but fewer can name the four Agile values or any of the twelve principles of Agile Software Development. I would like to focus on a few particular points that get lost in the mix of our daily work:

  • Prioritize individuals and interactions over processes and tools.
  • Business people and developers must work together daily throughout the project. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Does your team practice “agile” and do you find that they practice it with the values and principles I’ve listed above? If the answer is yes, that’s wonderful! Otherwise, let’s look at why you stand to benefit from it.

Unlocking team potential through thoughtful collaboration

People > processes

When I first started my software development career, I wanted the ability to build whatever I could imagine. I presumed if I was great at writing software, then I could reach that goal. Over time I realized no matter how good I was at writing software there were many other factors that could affect my ability to build things in the workplace. Ultimately, the people I worked with every day were the most influential in the outcomes of my daily and long-term work. By being thoughtful about how we work together — beyond just the rote processes in place — we can expand our impact. 

Leverage real-time collaboration to drive success

Many teams try to parallelize work to deliver software quickly, which often leads to more churn. Rather than assigning individual tasks that may not align with the highest priority business goal, we should work on fewer things together in real-time to ensure higher quality. Segregating ourselves by discipline is unproductive, as all team members can provide valuable insight at any stage of the delivery process — the earlier the better to ensure our efforts are aligned. 

Utilize the power of stand-ups for tactical planning

Stand-up meetings are a good place to observe this behavior: they’re often used to report individual status to a product manager. However, this approach can be improved. Instead of just reporting status, the stand-up should be treated as a tactical meeting designed to assess the current situation and devise a plan to complete the next near-term goal. Much like a halftime talk in soccer, stand-ups are our window to survey the current situation and decide what our next move is. Then, we can more effectively progress in the game and actually deliver the highest priority tasks in the sprint.

Remember to retrospect with introspection

Retrospectives are often used as venting sessions to express frustrations, usually about something outside of the team’s control, or to highlight some particular shortcoming during the sprint. However, a more effective approach is to assess the team’s processes and behaviors and experiment with alternatives to improve its overall success. It’s important to remember that change takes time and conditions can change, requiring adaptation. It is a continuous cycle. Effectively coping with change is true agility!

Embracing uncomfortable growth

The path to Agile excellence

What I’m suggesting is not revolutionary, but rather a change of perspective. Looking at things a bit differently than before highlights opportunities for improvement. I understand that working closely with others can be uncomfortable or stressful, and it’s okay to feel this way, but the best way to mitigate those feelings is to push through the discomfort. Besides, whoever said that being agile would be comfortable? It’s certainly not convenient. Agility is difficult and takes a good deal of intention and effort. It’s a skill like any other, and there’s sure to be some trial and error, so keep at it!

Accelerating Your Build Process: Strategies for Optimizing Gradle
https://tala.co/blog/2023/04/25/accelerating-your-build-process-strategies-for-optimizing-gradle/
Tue, 25 Apr 2023 21:46:40 +0000

Get the most out of Gradle in your daily builds with these tips.



By: A. Abhishek, Android Lead Engineer

Long build times are something most developers despise, as they break the flow of development and reduce productivity. Even small improvements in build speed, accumulated over dozens of developers over time, add up significantly. Prior to our Gradle optimization, engineers could watch 33 back-to-back soccer games in the amount of time they collectively spent running tests and builds every day. Even a sloth could travel across a soccer field approximately 114 times in the time our engineers were waiting for these tests to run.

With Gradle optimizations in place, however, we were able to save about 25 hours of CI time daily and 10 hours of cumulative developer time daily — that’s 9,100 hours a year.

Decreasing build time has been a focus for our Android team at Tala, given it is proven to increase developer productivity and unlock new efficiencies. Read on for seven ways we reduced build and test times.

1. Enable Build Cache

Gradle is incremental by default, which means it will not execute a task if its inputs and outputs have not changed since the last time it was executed. Most of the plugins we use have incremental tasks, and if we are writing our own tasks, we design them to be compatible with Gradle’s incremental builds.

Even so, incremental capabilities can only reduce the build time to an extent; when we switch branches or fetch a remote repository again (which CI does a lot), the tasks will be executed again. Enabling the build cache expedites this process. Gradle caches the output of tasks in the local .gradle directory, so when we want to execute a task, it will look for the output in the cache after the incremental check has failed. When a task is skipped due to the incremental feature, we see an UP-TO-DATE tag next to the task, and when the task output is fetched from the cache, we see a FROM-CACHE tag.

Caching is primarily beneficial in CI environments as subsequent builds can just get the output from the local cache instead of executing the task. At Tala, enabling this feature has brought our CI times down by an average of 50%.

To enable Gradle caching, add this line to your gradle.properties file:

org.gradle.caching=true

You can even go one step further and enable remote caching. Remote caching uses a cache on a server to store your task outputs so they can be shared across machines. When one developer checks out a branch another machine has already built, they don’t have to build again; they can just download the output from the remote cache. This is obviously more tedious to set up, but the nice people at Gradle will do it for you if you have Gradle Enterprise.

2. Go Parallel 

To bring down build times, use all of your CPU cores by building in parallel: Gradle has a feature where it can build modules simultaneously.

To utilize this feature, your codebase must first be modularized. Properly modularizing a codebase is by no means a trivial task, and if done incorrectly can certainly hamper your build speeds. To effectively modularize your codebase, expand horizontally rather than vertically. This strategy will allow Gradle to build multiple modules in parallel.

For example, your dependency graph should be wide and shallow, with many independent modules depending on a small set of shared core modules, rather than a deep vertical chain in which each module depends on the next.

To enable parallel builds, just add the line below to your gradle.properties file.

org.gradle.parallel=true

3. Reduce Build Variants (Android)

In any standard Android codebase, we will have a few product flavors to manage environments like QA and Prod. Build variants are then created for every combination of flavor and build type (debug and release), so in this example we will have four build variants: qaDebug, qaRelease, prodDebug, and prodRelease. There is a configuration cost for each build variant, because Gradle creates separate tasks for each of them. If you think about it, you do not need the qaRelease variant, because we will never release a QA environment build.

This configuration cost multiplies as we add more and more dimensions to the variants and can really add up in a project with lots of variants. Thankfully, the Android Gradle plugin provides an elegant way to disable the variants we do not need via the variantFilter block.

android.variantFilter { variant ->
    // Ignore release variants for any flavor other than prod
    // (e.g. qaRelease); no tasks will be created for them.
    if (variant.getBuildType().name == 'release'
            && !variant.name.toLowerCase().contains('prod')) {
        variant.setIgnore(true)
    }
}

Here we are disabling any release variants outside the prod flavor, so no tasks will be generated for them during the configuration phase, sparing the project from creating unnecessary tasks.

4. Running tests in parallel

Slow tests can also hamper developer productivity, and by default, the tests are not run in parallel. However, we can easily change that with a block of code in our build.gradle file:

test {
    maxParallelForks = Math.max(1,Runtime.runtime.availableProcessors().intdiv(2)) // half of CPUs but no fewer than 1
}

With this configuration, Gradle will spread your test execution to half of the available processors.

5. Keep Gradle and Plugins Up-To-Date

This is a no-brainer since the Gradle team keeps enhancing the build mechanism, improving the build speed significantly. We also keep updating the plugins we use because even the plugin maintainers actively convert their tasks to be incremental and cacheable. There is no point in enabling build caching if the plugins you use do not also have cacheable tasks!

6. Optimize Repositories

We tell Gradle where to look for our dependencies in the repositories{} block of the build.gradle file, but order matters: Gradle goes through your repositories one by one, in the order listed, for each and every dependency defined. Therefore, be sure to order the repositories in descending order of the number of dependencies they provide. For example, if I am fetching from three repositories, RepositoryA, RepositoryB, and RepositoryC, which provide 3, 11, and 5 dependencies respectively, then the repositories block should list RepositoryB first, then RepositoryC, then RepositoryA.

If you do not order them this way, Gradle will make many wasted HTTP requests against the wrong repositories when fetching dependencies for the first time (or whenever you update them), slowing dependency retrieval.

7. Configuration Caching

Configuration caching is a feature that can significantly improve build performance by caching the result of the configuration phase and reusing the output for subsequent builds. If there are no changes that can affect the configuration, Gradle has the ability to skip the configuration phase entirely.

We can enable configuration caching while building by adding the optional parameter, --configuration-cache:

gradlew assemble --configuration-cache

Or we can enable it in the gradle.properties file by adding:

org.gradle.unsafe.configuration-cache=true

Be mindful that this feature is still experimental and not yet compatible with many plugins. However, if Gradle realizes it cannot cache the configuration phase, it will fail the build rather than silently fall back.

Improving Build Speeds with Gradle

We’ve made significant strides to improve our build speeds with Gradle, due, in part, to these foundational tips. To actually see how much your builds have improved, check out Gradle Profiler. To analyze any shortcomings in your setup, use Gradle Build scans by adding --scan to your builds to generate an in-depth scan:

./gradlew assemble --scan

Improving build times for a project impacts the overall productivity of its developers and is something our Android team at Tala treats as a top priority.

Three Key Soft Skills for Engineers
https://tala.co/blog/2023/04/12/three-key-soft-skills-for-engineers/
Wed, 12 Apr 2023 10:00:00 +0000

Shon Shampain shares his advice for engineers wanting to build their career.



By: Shon Shampain, Senior Manager, Android

In the engineering field, many prioritize the development of hard skills over soft skills. However, it’s crucial to recognize that soft skills are just as valuable. Here are three guiding principles I recommend to all engineers wanting to grow their skills and build a more effective career.

Connections, Connections, Connections

A piece of advice I give to all engineers coming into the Android group here at Tala is that the quality of their career trajectory will almost certainly be highly correlated with the quality of the relationships they cultivate. They say real estate is all about location, location, location; career development is all about connections. Nowhere is this more important for my team than with quality assurance (QA). In all respects, engineering and QA are the closest of cousins, both focusing on the quality of the code produced. The big difference in many cases is that engineering often focuses on a narrow aspect of a product feature while QA ensures integrity of the whole product.

Developing strong relationships is so important that I want to explain it a bit further. I find that the best foundation for a good work relationship is respect and empathy for my colleague’s job. In other words, as an engineer, I want to reach out to the tester I’m going to be working with and understand what their requirements are, how busy they are, how they go about testing, and, last but not least, how I can make their life easier. Being conscientious about relationships with QA is vital because when push comes to shove and you have an issue, it’s QA — nine times out of ten — who’s going to bail you out.

No One Has Time But We All Make Time

This skill is essential in the corporate world: time management. I believe it’s inaccurate to say, “Sorry, I didn’t have time.” Not only is it inaccurate, it’s a nonsensical concept: no one has time; we only make time for that which is important to us. If you don’t do something and are asked about it, the honest answer is, “I chose not to make time for this.”

To support the effective use of our precious time, then, the first important question to your manager should be, “What’s most important for me to work on?” At Tala, we use stack ranking to determine priority, and this has a very beneficial side effect: when the order changes and something gets inserted, something else has to drop.

Given a clear sense of what’s important, the next step is to make time for it. I suggest physically blocking out segments of your day on your calendar for the areas that are important. During these blocks, keep distractions to a minimum and focus only on the task at hand; in most cases, Slack can wait. Then repeat the exercise for your week. If you have more tasks than time slots, go to your manager and explain that this is what your work week can support, and that the items not listed won’t be addressed unless priorities shift. This is a great chance to take control of your work/life balance and ensure agreement on priorities.

You Are the Captain of Your Destiny

The final soft skill I recommend: take control of your destiny and have frank discussions with your manager about your career trajectory. From experience, I can unequivocally state that if you don’t, you’re likely to get just that — a random, nebulous career trajectory that looks nothing like what you hope for.

The key for a quality career trajectory is alignment. Alignment means something very special. If you want to develop the skills to become an architect, and you talk to your manager, and there turns out to be an opportunity to pursue these skills in your group, you have alignment. The special something is that now you can be selfish in pursuing your goals because when you improve, the company likewise improves. When your problem is my problem, we move together in harmony.

If there are no opportunities for you to pursue your interests, then you probably have the wrong job. And if your manager is uninterested in helping you pursue your interests, then again, you probably have the wrong job.

At Tala, we value our people more than any other resource and go to great lengths to ensure they are happy, healthy, and find a compelling career trajectory. If this resonates with you, then check out our open postings. We’re hiring and you might just be a good fit.

The post Three Key Soft Skills for Engineers appeared first on Tala.

]]>
6978
Be the Code You Want to See: Domain-Specific Languages https://tala.co/blog/2023/03/28/be-the-code-you-want-to-see-domain-specific-languages/ Tue, 28 Mar 2023 20:52:25 +0000 https://tala.co/?p=6951 Learn how DSLs can be a powerful tool for software engineers to solve complex problems in specific domains.

The post Be the Code You Want to See: Domain-Specific Languages appeared first on Tala.

]]>
This series takes a deeper look at how our engineers build digital financial services. Learn more about how they address large-scale technical challenges at Tala.


By: M. Silva, Lead Android Engineer

As software systems become more complex, it becomes increasingly difficult to write and maintain code that is efficient, reliable, and scalable. One common byproduct, code duplication, can undermine the maintainability of your code base.

Let’s look at creating Retrofit instances as an example. In our code base, we had attempted to streamline this using abstract classes and other typical object-oriented techniques; the result was still verbose, duplicated code! So I put my thinking cap on and analyzed exactly what was the same and what was different across all of them. They all had the following in common: they added a base URL, a call adapter factory, a converter factory, and an OkHttpClient instance. For the OkHttpClient instance, they all set timeouts, added a logging interceptor in debug mode, and added auth headers. After all of that configuration, the instance would be created and ready to use. It looked something like this:

// Interceptor that attaches session/auth headers to every outgoing request
val sessionInterceptor = Interceptor { chain ->
    val request = chain.request().newBuilder()
        .addHeader(HEADER_SESSION_ID, "session-id")
        .addHeader(HEADER_USER_ID, "user-id")
        .build()
    chain.proceed(request)
}
// Shared OkHttp configuration: retries, interceptors, and timeouts
val builder = OkHttpClient.Builder()
    .retryOnConnectionFailure(true)
    .addInterceptor(sessionInterceptor)
    .readTimeout(DEFAULT_READ_TIMEOUT_SECONDS, TimeUnit.SECONDS)
    .connectTimeout(DEFAULT_CONNECTION_TIMEOUT_SECONDS, TimeUnit.SECONDS)
// Log request/response bodies in debug builds only
if (BuildConfig.DEBUG) {
    builder.addInterceptor(
        HttpLoggingInterceptor()
            .setLevel(HttpLoggingInterceptor.Level.BODY)
    )
}
val okHttpClient = builder.build()
// Wire the client into a Retrofit instance and create the API stub
val api: Api = Retrofit.Builder()
    .baseUrl("https://0.0.0.0")
    .addConverterFactory(GsonConverterFactory.create())
    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
    .client(okHttpClient)
    .build()
    .create(Api::class.java)

I think it’s safe to say that’s a lot of boilerplate and ceremony. In our application, this would be then replicated, to some degree, for each backend service that we rely on, which is quite a few. 

I wanted a solution that would reduce code duplication, easily reuse common settings and easily override the common settings. After some experimentation, I decided that a domain-specific language (DSL) might be a good fit for solving this problem. 

The Benefits of Domain-Specific Language 

If you haven’t heard the term before, you’ve probably seen the technique used in some capacity. Wikipedia describes a DSL as a language created specifically to solve problems in a particular domain or business area, and not intended to solve problems outside of it. For reference, if you’re a Java/Kotlin/Android developer using Gradle as a build tool, you’ve used DSLs when configuring most plugins.

DSLs have a number of benefits; they’re declarative, composable, and not concerned with implementation details. Because of this, they tend to be easier to read, and they tend to reduce the overall cognitive load on the writer as well as future readers of the code.

In Kotlin, DSLs are primarily achieved with extension functions, lambdas with receivers, and some form of the builder pattern. Luckily for me, both Retrofit and OkHttp provide builders for their classes, so there wasn’t much to do there. All I needed was to decide what to include in my Retrofit/OkHttp DSL. After a few iterations attempting to clean up the above code, what I ended up with looked like this:

buildRetrofit {
    baseUrl("https://0.0.0.0")
    addGsonConverterFactory()
    addRxJava2CallAdapterFactory()
    setHttpClient {
        addRequestInterceptor { addSessionHeaders(sessionInfoProvider) }
        retryOnConnectionFailure(true)
        applyStandardTimeouts()
        if (BuildConfig.DEBUG) addLoggingInterceptor()
    }
}

At a glance, it doesn’t seem like much, but you would probably agree that it is much more straightforward to parse through in this form. 

Let’s look at the signatures for the DSL methods used above.

inline fun <reified T> buildRetrofit(builder: Retrofit.Builder.() -> Unit): T

inline fun Retrofit.Builder.addGsonConverterFactory(
    builder: GsonBuilder.() -> Unit = {}
)

fun Retrofit.Builder.addRxJava2CallAdapterFactory()

inline fun Retrofit.Builder.setHttpClient(
    client: OkHttpClient = buildOkHttpClient(),
    builder: OkHttpClient.Builder.() -> Unit = {},
)

inline fun OkHttpClient.Builder.addRequestInterceptor(
    crossinline urlBuilder: HttpUrl.Builder.() -> Unit = {},
    crossinline requestBuilder: Request.Builder.() -> Unit = {},
)

fun OkHttpClient.Builder.applyStandardTimeouts(
    configure: TimeoutSettings.() -> Unit = {}
)

inline fun OkHttpClient.Builder.addLoggingInterceptor(
    builder: HttpLoggingInterceptor.() -> Unit = {}
)

In case it isn’t obvious how to use them by the signature alone, let’s quickly go through them one-by-one:

  • buildRetrofit lets you configure a Retrofit.Builder and then builds and creates an instance of type T.
  • addGsonConverterFactory does as its name implies and optionally allows configuration of the Gson instance it uses internally.
  • addRxJava2CallAdapterFactory does as its name implies, nothing more.
  • setHttpClient allows a pre-configured client to be passed in and further customized. If no client is passed in, a shared instance is provided.
  • addRequestInterceptor adds a single interceptor that allows modification of the request URL and/or the request itself. The first parameter allows the URL to be modified for things like adding query parameters. The second parameter allows for the request to be modified for things like adding headers.
  • applyStandardTimeouts sets timeouts according to our application’s conventions. It uses a custom builder object with our conventional defaults set, and the caller can change these values in the lambda if necessary, e.g., applyStandardTimeouts { writeTimeout = CustomTimeout(seconds = 15) }.
  • addLoggingInterceptor adds the logging interceptor with the logging level set to our application’s convention which can be changed in the lambda.

As you can see, most of them take a lambda using a builder as the receiver to allow further customization if necessary, but, since they use default parameters, they can be called without specifying the lambdas to get the most common configuration. You could even create new functions that combine several existing functions, where it makes sense to do so. This approach worked exceptionally well for my use case because it was very easy to set up all Retrofit instances in virtually the same way with a few minor exceptions. It didn’t require any inheritance or some opaque mechanism to pre-configure settings. It also had the added benefit of limiting direct dependencies on OkHttp and Retrofit types by users of the DSL while still allowing for low-level customization if necessary.
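To make the mechanics concrete, here is a minimal, self-contained sketch of the same pattern. It uses a hypothetical HttpConfig type standing in for the real Retrofit and OkHttp builders, so none of the names below come from those libraries:

```kotlin
// Hypothetical config class standing in for Retrofit/OkHttp types;
// all names here are illustrative, not part of either library.
data class HttpConfig(
    val baseUrl: String,
    val readTimeoutSeconds: Long,
    val loggingEnabled: Boolean,
)

class HttpConfigBuilder {
    var baseUrl: String = ""
    var readTimeoutSeconds: Long = 10
    var loggingEnabled: Boolean = false

    // A convention helper, analogous to applyStandardTimeouts() above
    fun applyStandardTimeouts() {
        readTimeoutSeconds = 30
    }

    fun build() = HttpConfig(baseUrl, readTimeoutSeconds, loggingEnabled)
}

// The DSL entry point: a lambda with receiver over the builder,
// mirroring the shape of buildRetrofit { ... } above.
fun buildHttpConfig(builder: HttpConfigBuilder.() -> Unit): HttpConfig =
    HttpConfigBuilder().apply(builder).build()

fun main() {
    val config = buildHttpConfig {
        baseUrl = "https://0.0.0.0"
        applyStandardTimeouts()
        loggingEnabled = true
    }
    println(config.readTimeoutSeconds) // prints 30
}
```

Because the lambda's receiver is the builder itself, call sites read declaratively while defaults cover the common case, which is exactly what makes the Retrofit version above so compact.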

DSLs offer a powerful tool for software engineers and developers to solve complex problems in specific domains. By providing a language tailored to a specific application, DSLs can help to reduce errors, increase productivity, and improve collaboration between domain experts and software developers through increased readability. With this technique, the set-up code reads the way that you want it to. You have total control over what it looks like and how it is used. Hopefully, with this case study, you are tempted to find patterns in your code base and write some DSLs of your own.

The post Be the Code You Want to See: Domain-Specific Languages appeared first on Tala.

]]>
6951
A/B Testing at Tala https://tala.co/blog/2023/03/22/a-b-testing-at-tala/ Wed, 22 Mar 2023 14:06:45 +0000 https://tala.co/?p=6934 At Tala, we care deeply about providing the best experience to our users. Learn how we use A/B tests to understand user behavior, build the right product, and provide the best value to our customers.

The post A/B Testing at Tala appeared first on Tala.

]]>
This series takes a deeper look at how our engineers build digital financial services. Learn more about how they address large-scale technical challenges at Tala.


By: Bibin Sebastian, Engineering Manager

Have you ever wondered how successful digital businesses understand their customer needs, why their website is designed a particular way, or how they make quick data-driven decisions? 

A/B testing is one of the most popular and effective techniques used to understand your customers at a scalable level, unlocking the insights necessary to provide the best experience and value to your customers.

What Is A/B Testing?

It is best to explain what an A/B test is with an example. 

Imagine that your team owns the homepage of an e-commerce website. Arguably, it is the most important page for your site as almost all the users land on this page first. The homepage has a search widget that needs a new UI design to improve searches on the site. You have to come up with a new design based on a hypothesis. Now, how do you know if your hypothesis is correct and will produce the expected results?

That’s where A/B tests come in — you can use an A/B test to test the new UI design on your real users and see what performs best. 

An A/B test consists of two versions: a control version and a variant version. Control represents the existing version. Variant is the new version you want to test. So, in this particular case, control represents the existing search widget (A) and the variant is the new search widget design (B) you want to test.


For accurate results, an A/B test is limited to one variable. An extension of the A/B test, the multivariate test, uses multiple variants to assess multiple variables through a series of combinations.

The idea behind A/B testing is then to display each version of the search widget to a separate cohort of your users. For instance, 50% of the users will get the existing version of the search widget (A), and the remaining 50% will get the new design (B). To identify the winner, measure and compare the number of searches from both versions.

The beauty of A/B testing is the efficient feedback on your hypothesis from your customers, eliminating any guesswork when it comes to validating your product and design choices. This is a data-driven approach, making it easy to measure and analyze the impact. 

How to Design A/B Tests?

A systematic approach to A/B testing ensures that you get the best results. It requires quite some preparation and groundwork to develop A/B test use cases that will get the best outcome. At a high level, there are three main stages of A/B testing.

1) Researching and Prioritizing the Hypothesis to be Tested

Typically you plan an A/B test when you want to improve on something about your business, so it is important to do some research on your business use cases, analyze and collect data points, and come up with a list of hypotheses to be tested, aligned with your business goals. When it comes to testing your hypotheses, it is best to prioritize the list and go with a step-by-step approach, testing one or a few hypotheses at a time. 

It is important to note that you have to identify a metric (or multiple metrics) to measure your test. In the above example of the search widget, the metric used is “number of searches” from the widget.

2) Executing the Tests

For each test hypothesis, you need to identify the control and variant versions. Then, decide on the percentage split: what percentage of users should see the control versus the variant. You then configure the split and deploy your A/B test. The duration of the test is critical, as a significant amount of data is needed to measure the test against your metrics. The duration is calculated based on the number of active users, the variants under test, and the percentage of users in each variant.
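As a rough illustration of how those three inputs determine duration, here is a simplified back-of-the-envelope estimate. The helper and numbers below are hypothetical assumptions, not a substitute for a proper power analysis:

```kotlin
import kotlin.math.ceil

// Rough estimate: days needed for every variant to accumulate its
// target sample size. Simplified assumption: traffic is evenly spread
// across days and users are exposed at most once.
fun estimateTestDays(
    requiredSamplePerVariant: Int,
    dailyActiveUsers: Int,
    variantSharePercent: Int, // percentage of traffic each variant receives
): Int {
    val usersPerVariantPerDay = dailyActiveUsers * variantSharePercent / 100.0
    return ceil(requiredSamplePerVariant / usersPerVariantPerDay).toInt()
}

fun main() {
    // 10,000 samples per variant, 4,000 daily active users, 33% per variant
    println(estimateTestDays(10_000, 4_000, 33)) // prints 8
}
```

The intuition: halving a variant's traffic share doubles how long the test must run to reach the same statistical confidence.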

3) Measuring the Test Results and Deploying the Winner

Once you have enough data points, you can measure the metrics to see which version stands out in comparison, adopting the winner. If you are not satisfied with the results, use these learnings to formulate further A/B tests and get better results.

Engineering Considerations for A/B Testing 

If you are planning to embrace A/B testing practices, from an engineering standpoint, these are the two important factors to consider:

1) Tooling   

A good A/B testing tool is crucial for effective results. You can build a tool in-house or use one available on the market. Either way, make sure it has the following characteristics.

  • The ability to configure the tests: You should be able to configure the control and variant versions, the input parameters on which the split percentage is based, and the split percentage itself.
  • Ability to deploy tests quickly: Once you configure the tests in the tool, you’ll want it to be deployed for customer use. Hence, your tooling should easily integrate with your systems to get deployed to production environments quickly.
  • Measure the test results: Your tool should be able to capture the metrics and provide options for comparing the results.
  • Analyze the results: You should be able to analyze the past test results for continued learning.

2) Architecture Support for A/B Testing

To deploy A/B tests successfully in your organization, your software system architecture should have the necessary hooks built in. This is an architectural choice every organization has to make. A/B testing can be done at the application’s front end or in the back end. If you want to do front-end A/B testing, ensure the system that renders your front end is integrated with your testing tool; the same applies to the back end.

It is also a good practice to segregate which parts of the application are testable and which are not. To return to the e-commerce example, the homepage receives the most visitors, making it the most effective page for experimentation.

How Do You Implement A/B Tests in Your Code?

For the purposes of this explanation, let’s assume that the A/B testing tool you use provides a library or API for integration. So in simple terms, your code will look like this:

treatment = getTreatment()

if (treatment == "on") {
    // display variant version
} else {
    // display control version
}

function getTreatment() {
    // fetch the treatment from the A/B testing tool
}

If you have configured a 50:50 split between the control and variant in your tool, the getTreatment() method will return the value “on” only for 50% of your base. So 50% of your users will see the variant version, and the rest will see the control version.
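Under the hood, tools typically make the assignment deterministic so a given user always sees the same version for the lifetime of a test. A common approach, sketched here with a hypothetical helper rather than any particular tool's API, is to hash the user ID into a bucket:

```kotlin
// Deterministically assign a user to the variant ("on") or control ("off").
// Hashing the user ID together with the test name keeps assignments stable
// within a test but independent across tests. This is an illustrative
// sketch, not the API of any specific A/B testing tool.
fun getTreatment(userId: String, testName: String, variantPercent: Int): String {
    val bucket = (userId + testName).hashCode().mod(100) // always in 0..99
    return if (bucket < variantPercent) "on" else "off"
}

fun main() {
    // The same user always receives the same treatment for a given test
    val first = getTreatment("user-42", "new_search_widget", 50)
    val second = getTreatment("user-42", "new_search_widget", 50)
    println(first == second) // prints true
}
```

With variantPercent set to 50, roughly half of the user IDs hash below the threshold, giving the 50:50 split described above without storing any per-user state.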

A/B Testing at Tala

At Tala, we care deeply about providing the best experience to our users. We run tests on the Tala Android mobile app to understand user behavior, build the right product, and provide the best value to our customers. 

Here are two big A/B tests we deployed in 2022. Following the best practices we’ve outlined, our team developed these tests after conducting robust research and analysis on our business metrics, brainstorming, and prioritizing to create the best value for our customers.

Installment MVP

Tala’s Installment MVP feature was a test to understand the adoption of installment products among our India customer base. Since our current loan offering only had a one-time repayment option (control version), we wanted to see how customers would respond to multiple payment options (aka installments) and see if this flexibility helps them to repay the loan on time.

We introduced two variants: variant 1 (with both installment and single-payment options) and variant 2 (with only the installment option). Variant 1 was designed to understand which option customers would choose when given both. Variant 2 was designed to see whether customers would proceed when offered only installments. We deployed the experiment with a 33% (control) : 33% (variant 1) : 33% (variant 2) split.

Laddering UI

For this A/B test, we wanted to understand the best way to inform customers that they get approved for higher loan amounts when they pay on time. Our hypothesis was that by notifying customers that paying on time can unlock higher-value loans — and ladder up over time — customers would be motivated to pay back loans early. The team conducted a user study, talked to customers, and worked through multiple UI designs to finalize the best way to message customers:

  • Control (with text under “Paying on time helps you” section) 
  • Variant (with slightly modified text and image under “Paying on time helps you” section)

A/B testing is a key part of Tala’s experimentation practice. We continue to conduct small and big tests to improve and optimize our app to serve our customers better every day. If you find this interesting and would like to partner and collaborate with us, we are always looking for curious and passionate minds to join us — we are hiring!

The post A/B Testing at Tala appeared first on Tala.

]]>
6934