Today, software development drives the world's most successful businesses. What do Amazon, Tesla, and ING have in common?
DORA (DevOps Research and Assessment) has been studying development practices since 2013; its research now covers more than 2,000 organizations across all areas of software. The findings have been published in the book Accelerate.
This scientific study, based on raw data, shows a strong correlation between success (productivity, profitability, growth) and four key metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service.
The study itself, the KPIs, and the underlying mechanisms are described in depth in Accelerate: The Science of Lean Software and DevOps, by Nicole Forsgren, Jez Humble, and Gene Kim.
In our Lean practice, these four key metrics resonate strongly: they all promote rapid feedback loops and continuous improvement in the service of customers and teams.
The main strength of this framework is its rigorous approach, backed by statistics from the many companies studied, which gives it great legitimacy when convincing decision makers.
Applying this framework has shown us its power as a starting point for organizational change. Several teams have successfully used it to involve stakeholders in discussions aimed at improving the development process. We have observed positive impacts on deployment lead times and frequency, and greater involvement of teams in running and monitoring their apps.
The difficult point is implementation, which often requires challenging the organization in depth. The KPIs don't necessarily provide tools to drive these changes, but the case study in Chapter 16 of Accelerate provides a very useful analysis grid. This diagram lists the practices to be adopted by teams, management, and leadership to improve culture, organizational structure, innovation, deployment strategies, and flow through Lean.
OUR POINT OF VIEW
Our advice: read Accelerate and test it in your context, set up the indicators, and use the “Transformation” part as a guide to reach the next level.
HISTORY 2022
Trial - See the Tech Radar 2022
Several Scrum implementations have fallen into the same trap: focusing on user stories and relegating the notion of feature to the background. Under these conditions, a team may regularly have to deliver a series of user stories without any overall vision of the subject they belong to. The results of this bias are very harmful to productivity:
The core of a feature is developed within a sprint, but handling edge cases or certain sub-flows may take several more sprints. This is due to a lack of shared vision and alignment between product, design, and tech. Indeed, POs often lack the technical skills to specify a feature and find its boundary cases on their own.
Over the past few years at BAM, we have introduced a one-hour workshop at the launch of each feature. This workshop brings together all stakeholders to draw up a technical-functional diagram. Using a visual language inspired by BPMN, we draw all user behaviors as well as their technical impacts. This document then forms a common starting point for writing user stories.
A schema shared by all team members allows you to both:
The benefits are obvious:
For example, on one of our projects, this workshop allowed us to detect a functional misunderstanding that would have cost the team a five-week delay had it only been detected during implementation.
OUR POINT OF VIEW
We use this approach for any complex feature, and we invite you to test it on all your projects.
HISTORY 2022
It's a new blip this year.
To ensure that a unit of code, such as a function or a class, works as intended, developers can write code that verifies its behavior; this is called unit testing. From these tests, we can measure the proportion of functional code that is tested: the test coverage.
Writing unit tests improves the quality of a project's code, documents it, and makes changing existing code easier. However, writing tests can take time, especially when starting a new project, as can maintaining them throughout development.
At BAM, we encourage our engineering teams to write tests as development progresses. Some teams took on the challenge of testing 100% of the code base.
Here are our learnings:
Finally, testing 100% of the code opened up learning opportunities for us:
OUR POINT OF VIEW
We recommend trying 100% test coverage on your medium- or long-term projects. Even if this strategy is not enough to guarantee quality, it effectively trains teams in good development practices.
HISTORY 2022
It's a new blip this year.
How do you know if your app is performing well? This question is complex for several reasons, in particular because various metrics come into play: FPS, CPU, TTI, RAM...
Flashlight aims to answer this question (disclaimer: it is a tool we are developing). Flashlight gives your app a performance score, a bit like Lighthouse, but for Android apps (iOS is not supported). With Flashlight, no setup in the app is required to measure its performance; even production apps are supported, regardless of their technology (native, React Native, Flutter...).
However, Flashlight cannot yet explore the app on its own. By default, it will easily deliver a score for app startup, but to go further you will need to provide it with your own e2e tests. Flashlight will run these tests several times, aggregate various key metrics, and average the results into a performance report.
If you don't have e2e tests, taking advantage of Flashlight may be more complicated, but we recommend using Maestro to set them up quickly.
The core of Flashlight is open source, so you can use it on your own device; a cloud version that runs on a low-end Android device is also available in beta, to simulate the reality of the market. In just a few clicks, Flashlight Cloud gives your app a performance score and can be integrated into a CI to retrieve this score regularly. Test runs can be long (more than 10 minutes), so we do not recommend integrating it into each pull request, but rather tracking the score of app startup and a few critical paths weekly or before each release.
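As an illustration, a scheduled CI job could run this check weekly. This is only a sketch: the workflow syntax is GitHub Actions, the app path and flow name are placeholders, and the `flashlight cloud` command and its flags should be verified against the current Flashlight documentation:

```yaml
# Hypothetical weekly performance check (verify command and flags in the Flashlight docs)
name: flashlight-weekly
on:
  schedule:
    - cron: "0 6 * * 1" # every Monday morning
jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Measure a critical path with Flashlight Cloud
        run: |
          curl https://get.flashlight.dev | bash
          flashlight cloud --app app-release.apk \
            --testCommand "maestro test .maestro/checkout.yml" \
            --apiKey "$FLASHLIGHT_API_KEY"
        env:
          FLASHLIGHT_API_KEY: ${{ secrets.FLASHLIGHT_API_KEY }}
```

Running it weekly rather than on every pull request keeps the long test duration out of the developer feedback loop.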
We also invite you to use Flashlight to assess the impact of major technological decisions, or as an indicator to help you improve performance.
HISTORY 2022
It's a new blip this year.
This development assistant allows you to write code more quickly by offering autocompletion with ready-to-use code snippets.
Based on OpenAI Codex, it analyzes comments and code to suggest an implementation.
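For example, the developer writes a comment and Copilot proposes an implementation. The snippet below is illustrative: the function name and the suggested body are ours, showing the kind of completion the assistant typically produces:

```typescript
// Comment written by the developer (the "prompt"):
// Return the initials of a full name, e.g. "Ada Lovelace" -> "AL"

// Implementation of the kind Copilot then suggests:
function getInitials(fullName: string): string {
  return fullName
    .split(" ")
    .filter((word) => word.length > 0)
    .map((word) => word[0].toUpperCase())
    .join("");
}

console.log(getInitials("Ada Lovelace")); // "AL"
```

As with any generated code, the suggestion still needs to be reviewed before being accepted.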
We have recently seen strong momentum around generative AI, especially with ChatGPT. Copilot is part of this wave.
The use of Copilot is very effective in some cases, including:
Since its introduction, we have been evaluating its impact on developers' daily work and drawing several lessons from it. The product enjoys high user retention: of the developers surveyed, 80% gave an NPS score of 9 or higher out of 10. Among the feedback, we note time savings on recurring tasks: 72% of interviewees gave a score of 5/5 to the question: “Do you find that Copilot helps you write more quickly?” There are also 3 positive effects on learning:
A slowdown in the skill development of intermediate-level profiles is observed, because it is easier to obtain working code without real understanding. Despite this, we believe in Copilot's positive impact on productivity, and the risks can be effectively managed with the help of a tech leader.
OUR POINT OF VIEW
We recommend exploring the use of Copilot in your organization. Note that this fast-moving sector has several competitors, such as Tabnine, that are also worth evaluating.
HISTORY 2022
Assess - See the Tech Radar 2022
We mentioned Appium in our previous Tech Radar as an e2e testing framework. Maestro presents itself as a new alternative that builds on the lessons of its predecessors. And this is no empty promise: Maestro keeps Appium's major advantages without its drawbacks. Unlike Appium, Maestro's documentation is excellent, making it possible to create a fairly thorough e2e test in under 30 minutes. The API is well thought out and provides the essential building blocks for writing e2e tests (scroll, click, wait for an element to appear). In addition, Maestro works (like Appium) in black-box mode: you can run a Maestro test script on any app, without any prior setup, including your release candidate or even your production app from the stores.
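For illustration, here is what a minimal Maestro flow looks like: a simple YAML file (the app id and the text selectors are placeholders for your own app):

```yaml
# login.yml — can target any installed build, with no setup in the app
appId: com.example.app
---
- launchApp
- tapOn: "Email"
- inputText: "user@example.com"
- tapOn: "Log in"
- assertVisible: "Welcome"
```

Running it is a single command: `maestro test login.yml`.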
Maestro also has a Cloud version, which lets you quickly and simply set up a CI with tests, screen recordings, and logs.
Maestro is still young, and we have occasionally encountered bugs in some versions, as well as slowness on iOS. Maestro is also less easily extensible than Appium when it comes to adding out-of-the-ordinary behaviors (like launching an ADB command or simulating an ultra-fast scroll).
OUR POINT OF VIEW
Maestro could quickly become the market standard, as the team is constantly adding new features that revolutionize test writing. Our favorite novelty? Maestro Studio, which almost writes the tests for us! We therefore recommend using Maestro if you do not already have e2e tests in place. In fact, it is the solution recommended by Flashlight for running e2e performance tests.
HISTORY 2022
It's a new blip this year.
Our QRQC post-mortem approach showed us that bugs are detected far too late. Problems that are easy to detect are caught during validation and QA, but deeper defects are often discovered only after months or years. This limits opportunities for learning and improvement.
We were inspired by the book The Toyota Way of Dantotsu Radical Quality Improvement by Sadao Nomura. It offers a classification of defects according to the stage at which they are detected, ranging from A (detected during the task itself) to D (detected by a user in production).
While our QRQC approach focused only on type D defects, the Dantotsu approach proposes reacting to defects detected at every stage: the later a defect is detected in the flow, the more expensive it is to correct and the harder it is to prevent the bad practice from spreading through the code.
We adopted this classification and experimented with a type A defect analysis approach, which we called “Right First Time”. It consists of 3 key steps:
If the code does not work the first time, it is a type A defect. We record the defect and the number of tests required to validate the expected functional behavior. At the end of the ticket, the person who implemented it identifies the lessons to be learned (e.g. by analyzing the erroneous assumptions of the initial strategy).
The initial results are encouraging:
The main limitation observed is the extreme rigor required to apply this method, which complicates its daily application by teams. We are studying how to simplify it and what supports to create to facilitate its adoption.
OUR POINT OF VIEW
Despite these obstacles, we encourage you to experiment with type A defect analysis in your own context and observe the discussions and lessons it generates. To learn more about this method, you can read Sadao Nomura's book and watch Fabrice Bernhard's talk.
HISTORY 2022
It's a new blip this year.
Creating animations only with code is a challenge for mobile developers, due to the complexity of synchronizing multiple elements and managing a large volume of code. The variety of platforms and frameworks also makes it difficult to create consistent and reusable animations. Furthermore, most existing solutions do not allow for direct interaction with animations via code, making it impossible to react to user interactions and possible changes in the state of the application.
Rive is a solution that meets these needs and simplifies the integration of animations for developers, compatible with most frameworks and platforms. In addition, the editor offers an intuitive user interface and supports the import of animations from other software such as Lottie.
Animations created with Rive can react to clicks, movements, or state changes based on the data received, providing a dynamic and immersive user experience. Another advantage of Rive is the optimized size of its animations, which can be 10 times lighter than a Lottie file, guaranteeing improved performance for mobile applications.
Rive positions itself as a solid competitor to other animation tools thanks to its better performance and richer APIs. Since the renderer and the runtimes are open source, they offer increased flexibility and adaptability to meet specific project needs.
OUR POINT OF VIEW
Despite some financial and technical constraints, such as the $14 per user per month subscription and the iOS 14 minimum for native applications, we recommend using Rive to create high-quality interactive animations.
HISTORY 2022
It's a new blip this year.
Generative AIs have seen increasing adoption in recent months, with ChatGPT leading the pack. There are many use cases; the most common on the Internet are co-writing marketing articles, automatic email writing and summarization, and code generation. ChatGPT is not, strictly speaking, a coding tool; there are models specially designed for this:
However, the successes people share are subject to selection bias: we see a few impressive results, but not all the failures. In our tests, we observed that several generated solutions are simply not functional, or contain subtle bugs. Any solution given by ChatGPT must therefore be reviewed and validated carefully. Relevance is another issue, since the context of the current codebase is not easy to provide. Productivity gains are therefore limited to very specific use cases: either standard solutions in a technology (login form, centered image) or high-level questions about practices adopted by the community.
We believe these problems will eventually be resolved as the ecosystem evolves rapidly. For example, some solutions could combine ChatGPT with AIs specialized in code, or add automatic verification systems. Tools such as ChatGPT are therefore definitely a trend to watch closely.
OUR POINT OF VIEW
In the meantime, we advise you to seek to strengthen your skills in Prompt Engineering and Fine-Tuning, the two key skills to take full advantage of this AI wave.
HISTORY 2022
It's a new blip this year.
In 2022, there were 6.6 billion mobile network subscriptions. Over the same period, a third of businesses experienced significant downtime or data loss due to a compromise involving a mobile device. Given this, it is essential for mobile application developers to build security into their apps right from the design phase.
Last year, we introduced CVSS, a standard that rates vulnerability severity on a scale of 0 to 10. However, this standard does not provide guidance on what to look for. Without a solid background in security, it's hard to know where to start.
The OWASP Mobile Application Security Verification Standard (MASVS) is a security standard for mobile applications. Its version 1.5 is divided into 2 levels, each covering increasingly advanced security controls:
These levels can be complemented by a set of controls (level R) aimed at increasing resilience to reverse engineering and tampering. This yields 4 security levels: MASVS-L1, MASVS-L1+R, MASVS-L2, and MASVS-L2+R.
Together they comprise 84 security checkpoints covering different areas, such as data storage, authentication, and network communications. For each checkpoint, guides and tools are provided to explore and fix possible security flaws. Currently being rolled out, version 2.0 simplifies the checkpoints and aligns the definition of security levels with the OSCAL format.
OUR POINT OF VIEW
We introduced MASVS v1.5 on a project where security is a major issue. In particular, the standard makes it easier to collaborate with security auditors. This trial convinced us of its relevance, and we are now deploying it on a larger scale to strengthen the security of all our applications.
HISTORY 2022
It's a new blip this year.
Until a few years ago, no-code mobile was a dead end. The solutions, very often web-based, left something to be desired in terms of quality and performance. They were mainly appropriate for POCs or low-stakes internal tools where user experience was not the priority.
The ecosystem has recently made significant progress with new tools based on Flutter (such as FlutterFlow) or React Native (like Draftbit).
Even though they are not yet very mature, we have already had very positive experiences with them compared to the previous generation. The interfaces offer sufficient flexibility for standard use cases, and the ability to export the code makes it possible to avoid lock-in.
OUR POINT OF VIEW
Having alternatives to WebView-based solutions makes it possible to better meet user expectations. We recommend taking a look at these tools and trying them on low-budget projects; they are worth the detour.
HISTORY 2022
It's a new blip this year.
In the process of developing a feature, proofreading the code is a key step in ensuring its quality and compliance with the team's standards.
However, adding a control step for each change blocks the developer and hurts productivity. To avoid this, Rouan Wilsenach proposes (on his blog) an approach called “Ship, Show and Ask.” For each change, it consists of choosing between these three options:
Recently, we tested this approach on three projects with experienced teams. It reduced pull-request review time, promoted deeper reflection on the meaning of each pull-request by giving responsibility back to the developer, and encouraged seeking consensus on the solution before coding rather than after. Systematic “Ask” review requests have fallen to nearly 50% since this implementation, and we did not observe any decrease in code quality.
This experiment is limited by our lack of hindsight given its recent adoption at BAM, by the already high quality of the code and technical design, and by the maturity of the tooling (CI, unit tests) in place.
OUR POINT OF VIEW
Code review is an interesting step to align the team with the code to be provided. On the other hand, we encourage you to think about quality holistically and select the tools (code review, tests, continuous integration, design, pair programming, ensemble programming, etc.) that are best able to guarantee the quality of each change according to the maturity of your teams and processes.
HISTORY 2022
It's a new blip this year.
In the development cycle of a feature, the proofreading stage (or code review) is an essential stage. It ensures that the product code is of good quality, and that it complies with the team's standards.
A common drift is to take this standardization to the extreme and establish an ever-longer list of rules to respect. Developers end up no longer reading the rules, ticking the boxes automatically, and no longer thinking about the impact of their code.
To avoid this drift, we tried to replace the list of rules with a “test plan” section in which developers should describe the tests they performed to validate their code. This section is inspired by the “test plan” practices popularized by Meta to contribute to open source projects.
Since the “test plan” section is free-form, developers can describe what makes the most sense in the context of the project and the ticket. In practice, we have seen, for example, PostgreSQL query plan studies to ensure index performance, accessibility tests of a mobile page with a screen reader, and web page performance tests. This creates interesting discussions between the developer and the reviewer during code review on the one hand, and provides more material to learn from during a post-mortem on a bug (QRQC) on the other.
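For illustration, such a section in a pull-request description might look like this (the content echoes the examples above and is, of course, hypothetical and project-specific):

```markdown
## Test plan
- Ran EXPLAIN ANALYZE on the new orders query: the index is used as expected
- Navigated the updated screen with VoiceOver: every control is announced with a label
- Loaded the page on a throttled 3G profile: no visible regression in load time
```

The point is not the format but that the author states what was actually verified, giving the reviewer something concrete to discuss.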
But, like documentation, the “test plan” section requires additional effort and rigor. Over time, the quality of this section has become more and more variable across our projects.
OUR POINT OF VIEW
However, we think this section is a good compromise between a list of mandatory rules and the absence of any documented “quality check” in the pull-request. We therefore recommend trying this approach on your projects and adapting it to your needs.
HISTORY 2022
It's a new blip this year.
Find our experts' take on the techniques, platforms, tools, languages, and frameworks associated with the main mobile technologies we use every day at BAM: React Native, Flutter, and Native.