Build and Improve Apps Using App Analytics

Using app analytics helps developers find clues about how to build apps people want to use. App analytics go beyond basic measurements, such as visits, sessions, installs, and deletes, and instead reveal a range and combination of macroscopic and microscopic business, customer, and product trends that drive decision-making.

App analytics data dives deeper into how people feel about and engage with an app, helping you find solutions to customer problems faster. It enables measurable experimentation that uses qualitative and quantitative customer data to validate features and measure overall progress.

To use app analytics effectively, your data needs to be set up, structured, and analyzed in an ethical, logical, and digestible way. The setup should be easy to adjust as the software evolves and a viable app solution emerges.

In this post, I’ll break down some common experimental methodologies and app analytics concepts that app entrepreneurs should understand, and share examples of free analytics tools to help you get started.


Experimental methods

There are several methodologies out there that prescribe measured experimentation, and they can help developers understand what’s working well for their app and what needs tweaking and improvement. The following methods represent a cross-section across time, industry, and discipline. All three suggest experimentation driven by a data-informed feedback loop.

1. Plan, Do, Study, Act (Deming)

American statistician and engineer W. Edwards Deming created the Deming Cycle, which originally used “Plan, Do, Check, Act” as a method to improve a process, product, or service. He later replaced the word “Check” with “Study,” which highlights his focus on learning and experimentation. The W. Edwards Deming Institute website explains:

“His focus was on predicting the results of an improvement effort, studying the actual results, and comparing them to possibly revise the theory.”

2. Acquisition, Behavior, Outcome (Kaushik)

Avinash Kaushik, Digital Marketing Evangelist for Google, suggests his Digital Marketing and Measurement Model, which describes how to measure digital marketing efforts, and tie business objectives and goals to metrics that measure performance. Kaushik’s method reminds us to measure customer behavior performance against anticipated outcomes.

3. Build, Measure, Learn (Ries)

Eric Ries wrote The Lean Startup, which combines four key disciplines: lean manufacturing, customer development, design thinking, and agile software development. A developer and entrepreneur himself, Ries, together with his co-founder, discovered the art of building with experiments that validate assumptions and determine what to build, which helped them spend less time developing and more time validating what should be built. Ries advocates for validation through experimentation, and for a rapid customer feedback and development loop, to more quickly determine whether customer demand leads to a sustainable software model.

Why these methods?

How we build apps evolves just as much as the technologies we use to build them. The Lean Startup was published in 2011, and may be remembered as bringing together four time-tested disciplines that created a more scientific way to build software. All this at a time when cloud computing, architecture, and automation were advancing and being adopted at an accelerated pace. Web analytics were custom built and costly to maintain at the time, and cloud analytics services were just beginning to catch up when Ries released his book.

Since then, digital analytics and experimentation software has exploded and earned significant attention. Kaushik’s Digital Marketing and Measurement Model is the only clean framework I’ve managed to find that ties objectives, goals, and targets to metrics, KPIs, and benchmarks. Deming’s method has influenced Agile software development and is a testament to how long experimentation and continuous improvement have been around.

When these experimental methods are applied together to app development using analytics, the process is powerful, but also more involved. It requires a few more steps than these easy-to-remember methods suggest.

You might also like: Why You Should Build Apps That Address Merchant Pain Points.

Method breakdown

Here’s a more realistic view of how analytics might be used within an experimental Agile build process. In practice, it looks more like this:

  • Qualify — What to build using qualitative customer feedback.
  • Iterate — Release new features with data collection capabilities.
  • Quantify — What was built using aggregated quantitative usage data.
  • Verify — Assumptions by analyzing user performance metrics.
  • Adjust — Assumptions, questions, strategy, mindset.
  • Repeat.

Qualify, iterate, quantify

The first three steps of app experimentation are the most involved and require the bulk of the work. Qualify what app and features should be built using direct feedback from merchants and their customers. Use what you learn to enhance the next iteration of your product. Collect usage data to create concrete proof that the app is used as expected. Understanding this part of the experimentation process is important — here’s a deeper explanation of these steps.

1. Qualify

Qualify what to build. Qualitative data is my favorite kind of data. It connects you more directly to the human beings who use the software.

Qualitative data gives a sense of clarity, context, and feeling that helps make sense of why the app numbers are the way they are. This type of data can be collected through customer interviews, surveys, reviews, and support tickets. For example, here at Bold Commerce, we gather app reviews as they come in, bucket the feedback into groups, and analyze it for trends.

Spreadsheet example of review keyword analysis with buckets.

We run these analyses to find clues that point to what customers need. Qualitative data helps clarify and give context to quantitative customer data, offering insight into why the numbers representing user behavior look the way they do.
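As a rough illustration of the bucketing step, here’s a minimal TypeScript sketch that groups reviews by keyword. The bucket names and keywords are hypothetical, not Bold’s actual categories:

```typescript
// Hypothetical keyword buckets for grouping app review feedback.
const buckets: Record<string, string[]> = {
  pricing: ["price", "expensive", "cost", "billing"],
  usability: ["confusing", "easy", "intuitive", "setup"],
  support: ["support", "response", "help", "ticket"],
};

// Assign each review to every bucket whose keywords it mentions.
function bucketReviews(reviews: string[]): Map<string, string[]> {
  const result = new Map<string, string[]>();
  for (const review of reviews) {
    const text = review.toLowerCase();
    for (const [bucket, keywords] of Object.entries(buckets)) {
      if (keywords.some((k) => text.includes(k))) {
        result.set(bucket, [...(result.get(bucket) ?? []), review]);
      }
    }
  }
  return result;
}

// Example: count reviews per bucket to spot trends.
const counts = bucketReviews([
  "Setup was confusing at first",
  "Great support, fast response",
  "Too expensive for small stores",
]);
counts.forEach((rs, bucket) => console.log(bucket, rs.length));
```

In practice, a spreadsheet like the one pictured above works just as well at low volume; the point is that consistent buckets make trends visible.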

2. Iterate

Iterate on what was built. After you qualify what merchants want, it’s time to iterate by building the next increment of the app. Iterating means running continuous, timeboxed sprints dedicated to enhancing the product for the next round of feedback.

It’s timeboxed to ensure teams don’t over-develop before getting feedback, which reduces investment costs. The shorter the timebox, the faster teams learn whether the software is on the right path. Building in small iterations saves time and money because it reduces the risk of developing features nobody wants or needs. Developing iteratively is a well-known and widely accepted Lean and Agile software development concept.

Analytics and event tracking are also added, updated, and tested during this stage, so that data collection is ready to validate the new updates. When the product is released, the data collection process begins.

3. Quantify

Quantify what was built with usage data. Quantitative data collected from software usage is summarized, presented, and compared with how you expected users to react to the release. Your quantitative data will be unique to each app and its stage of development.

At Bold, we track user and customer churn over a given period. Churn essentially compares the customers you had at the beginning of a time period with those you have at the end of the same period. It’s quantitative because it’s numeric in nature.
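To make that concrete, here’s a minimal sketch of the common “customers lost over customers at start” churn calculation. It illustrates the general formula, not necessarily Bold’s exact method:

```typescript
// Simple churn rate: fraction of starting customers lost over the period.
function churnRate(
  customersAtStart: number,
  customersAtEnd: number,
  newCustomers: number,
): number {
  // Customers lost = those at the start who are gone by the end,
  // excluding customers acquired during the period.
  const lost = customersAtStart + newCustomers - customersAtEnd;
  return lost / customersAtStart;
}

// Example: 200 customers at the start, 30 acquired, 210 at the end.
// Lost = 200 + 30 - 210 = 20, so churn = 20 / 200 = 10%.
console.log(churnRate(200, 210, 30)); // 0.1
```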

Quantitative data, however, doesn’t stop at churn alone. Retention, bounce rates, and engagement are other examples of measures that are used. Quantitative data is context-specific, so each team’s metrics will vary.

This data is analyzed for insights so teams can adjust their plans and continue building.

Verify, adjust, repeat

Verify. Once the qualify, iterate, quantify process is complete, analyze the results to verify whether the app and its features are something customers value. The analysis may answer questions such as:

  • Does the app solve problems that matter to users and customers?
  • Are there any feedback insights or behavioral patterns that give us clues that can help us delight customers in the next iteration?

Adjust. Extract insights from what you verify works and what doesn’t, and adjust your strategy, mindset, and possibly your whole direction as necessary.

Repeat. Once you quantify your results, it’s time to repeat the process: go back to qualifying, and get insights into why customers used the software the way they did, what they value, and what other problems you want to solve next.

Metrics are used to determine whether app performance is getting better or worse.

What are metrics: Leading vs. lagging

Metrics are standard measures of data. They can be leading or lagging indicators of your app’s performance.

Lagging measurements show what happened in the past. Leading measurements are more predictive, showing what’s happening now. Great metrics are actionable, especially if you find causal ratios (two numbers that depend on each other). Ratios with a causal relationship allow teams to make more predictive app development decisions.
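As a quick sketch of a causal ratio: if activations depend on installs, the ratio between them is an actionable, forward-looking number. The metric names here are hypothetical:

```typescript
// A causal ratio: activations depend on installs, so the ratio is a
// leading signal of whether new users find value. Names are hypothetical.
interface WeeklyFunnel {
  installs: number;    // new installs this week
  activations: number; // installs that completed key setup this week
}

// If activation rate falls while installs hold steady, onboarding —
// not acquisition — is likely the problem, which tells you what to fix.
const activationRate = (week: WeeklyFunnel): number =>
  week.activations / week.installs;

console.log(activationRate({ installs: 120, activations: 84 })); // 0.7
```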

One example of a metric is churn. Churn can be both a leading and a lagging metric. Usually churn is calculated at the end of a monthly cycle and then projected annually. To calculate churn as more of a leading indicator, check out this article from Shopify Engineering, where customer days are used as the basis of the calculation.
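As a rough sketch of the customer-days idea (a simplification, not the Shopify Engineering article’s exact formula), you can divide churn events by the total days customers were observed, which yields a per-day churn probability that updates continuously:

```typescript
// A per-day churn probability based on customer days — a rough sketch of
// the customer-days idea, simplified for illustration.
interface Customer {
  daysObserved: number; // days the customer was subscribed in the window
  churned: boolean;     // whether they cancelled during the window
}

function dailyChurnProbability(customers: Customer[]): number {
  const churnEvents = customers.filter((c) => c.churned).length;
  const customerDays = customers.reduce((sum, c) => sum + c.daysObserved, 0);
  return churnEvents / customerDays;
}

// Because it updates with every day of data, this reads as a leading
// indicator: you don't have to wait for month-end to see churn moving.
const sample: Customer[] = [
  { daysObserved: 30, churned: false },
  { daysObserved: 12, churned: true },
  { daysObserved: 30, churned: false },
];
console.log(dailyChurnProbability(sample)); // ≈ 0.0139 per customer day
```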

Metrics are context-specific to the customer, solution, and business implementing them. Experimentation may be required to establish the metrics that work for a given app.

But how do you collect, store, and track these values?

Tools: Google Analytics and Google Tag Manager

There are many tools that can help you monitor app analytics, but Google has put out excellent Google Analytics and Google Tag Manager videos, along with their Analytics Academy, for anyone who wants to learn the basics. Here’s a quick introduction to put it into perspective.

Google Analytics helps you gather, store, and organize your application analytics data. However, it does not provide a friendly way to track specific application event data. For more information about Google Analytics, check out Google Analytics for Beginners.

To collect that behavioral data, you can use Google Tag Manager.

Example of a Google Tag Manager event and click trigger.

Google Tag Manager has tools that help send event data to Google Analytics without an up-front investment in custom event tracking development while the app evolves. For example, triggering data collection on click events shows how many people use specific features. Using Tag Manager, you can send button clicks and form events that occur in your app.
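As a minimal sketch, an app can push click events into Tag Manager’s dataLayer, where a trigger like the one pictured above forwards them to Google Analytics. The event name, variable, and element ID below are hypothetical and must match whatever you configure in your GTM container:

```typescript
// Push a custom event into Google Tag Manager's dataLayer when a
// feature button is clicked. GTM triggers configured to listen for the
// "feature_click" event can then forward it to Google Analytics.
// The event and featureName values are hypothetical — they must match
// the trigger and variables set up in your GTM container.
declare global {
  interface Window {
    dataLayer: Record<string, unknown>[];
  }
}

window.dataLayer = window.dataLayer || [];

function trackFeatureClick(featureName: string): void {
  window.dataLayer.push({
    event: "feature_click",
    featureName,
  });
}

// Example: wire the tracker to a button in the app's UI.
document
  .querySelector("#export-button")
  ?.addEventListener("click", () => trackFeatureClick("csv_export"));

export {};
```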

You might also like: How to Use Google Analytics to Improve Your Web Design Projects.

It takes time and practice

When this process is done well, teams can re-hypothesize what customers want, and continuously inspect and adapt their process and progress to ensure the app is satisfying users and customers.

Using app analytics to inform your decision-making process is essential for creating great apps. Successful developers use analytics to validate whether or not they’ve built something people find valuable. Teams achieve this when they:

  • Qualify — What to build using qualitative customer feedback.
  • Iterate — Release new features with data collection capabilities.
  • Quantify — What was built using aggregated quantitative usage data.
  • Verify — Assumptions by analyzing user performance metrics.
  • Adjust — Assumptions, questions, strategy, mindset.
  • Repeat.

Applying this type of experimental inspect-and-adapt approach using app analytics may take some getting used to. It’s easy to get bogged down or paralyzed by large amounts of information.

What you’re looking for are clues that help you make great decisions about what to build next, and whether what you’ve built is valued by the most important people: merchants and their customers. Validate what to build and whether you’re iterating towards building the right thing.

Have you had any major changes to your app, based on data you collected and analyzed? Let us know in the comments below!

About the Author

Nole has been developing software on the web for over twenty years. He works as a Product Manager at Bold Commerce building Shopify apps, and is passionate about solving merchants' and their customers' problems.
