How To Improve Your Apps
~ Written by Thomas Wesseling
Building great apps and achieving high ratings isn’t easy. Developing apps requires new skills in different fields of expertise, as I have already explained in one of my earlier blog posts. But how can you improve once your apps have been pushed out to the stores? There are many ways to find improvements for your apps, and analyzing your feedback loop could be a good starting point.
Collecting feedback is an important activity, as it will help you prioritize app improvements in your product roadmap and release planning. This feedback can be used to support your business case for necessary changes and updates, which is crucial to obtain budget for new releases and keep innovating your app. There is no unlimited source of money available, so the development team should figure out where to focus and how to report this to the respective business owners.
The product owner will eventually convert improvements into epics or stories on the backlog for the development team. Once prioritized, both the product owner and scrum master can start working on a clear set of requirements for the developers and add the stories to the sprint backlog. Getting the right information out of your feedback loop is a continuous activity in your app development process and crucial to maintain focus. Below are a few pointers that should help you get the most out of your feedback loop.
1. Learn from other apps
There are plenty of existing apps out there, and in most cases you will find similar apps in the same category that have very good ratings and a high adoption rate. You can use these as a benchmark for your apps. Most apps use common design patterns for e.g. registration and integration with 3rd party apps and services. Next to that, Google and Apple regularly promote apps with a successful implementation of platform-specific design guidelines that app users are familiar with.
When useful? Benchmark research doesn’t require much time and is a very useful source of inspiration for improvements.
Any cautions? Manage expectations correctly towards the business owner and your team. To get to the point of 4/5 star ratings, app developers have been failing, learning and improving over a long period of time. Prepare yourself and the team to release and improve in iterations instead of aiming for a 4 star app right from the beginning.
2. Learn from feedback in the app store
The most accessible feedback would be the user reviews in the app store (readable by everyone).
When useful? If you have a very active user population they will rate your app and sometimes write reviews.
Any cautions? By only looking at the ratings (whether those are low or high) you could miss essential details on why your app is good or bad. Collecting app store reviews might not give you the complete picture. Apple currently does not offer options in iTunes Connect to respond to feedback from the App Store (the Google Play Store does), so you will need to think of other ways of interaction (see below). For all published apps you can use the release notes section of the app store to announce new features and improvements based on user feedback.
3. Implement in-app feedback options
Offer the user a feedback option inside your app. The simplest way is to trigger an email from your app with pre-filled text (variables), like high-level information about the user, the screen flow and even technical information such as the app version and device used. Of course there are more advanced ways to embed feedback forms in your app, but triggering an email is easier to implement and most users have at least one email app configured on their device.
When useful? In-app feedback should be the most accessible feedback option for the app user. Submitting feedback should not require more than 2 or 3 taps. Because there are options to add variables from the app to the feedback template, the feedback is more contextual.
Any cautions? Before adding in-app feedback options, discuss with your design team how to add this option in a discreet way that prevents any annoyances (e.g. if it uses an external email app for sending feedback, the user should be informed about leaving the app). For the simple email implementation you need your support admins to create a shared mailbox to receive the feedback. Make sure that your product support team is moderating the email questions. The product owner should have access to the shared mailbox to collect feedback and use it for reporting if necessary.
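The pre-filled email approach above can be sketched as a small helper that assembles a mailto: link with the app context baked into the body. This is a minimal illustration, not production code: the field names (app version, device model, last screen) and the support address are assumptions for the example, and a real Android app would hand the resulting URI to an email intent.

```kotlin
import java.net.URLEncoder

// Hypothetical helper: builds a mailto: URI pre-filled with app context so a
// single feedback tap opens the user's email client with diagnostics included.
// The parameter names and support address are illustrative assumptions.
fun buildFeedbackMailto(
    supportAddress: String,
    appVersion: String,
    deviceModel: String,
    lastScreen: String
): String {
    val subject = "App feedback ($appVersion)"
    val body = """
        |Please describe your feedback above this line.
        |---
        |App version: $appVersion
        |Device: $deviceModel
        |Last screen: $lastScreen
    """.trimMargin()
    // URL-encode subject and body; URLEncoder uses '+' for spaces, but
    // mailto: links expect %20, so swap them afterwards.
    fun enc(s: String) = URLEncoder.encode(s, "UTF-8").replace("+", "%20")
    return "mailto:$supportAddress?subject=${enc(subject)}&body=${enc(body)}"
}

fun main() {
    println(buildFeedbackMailto("feedback@example.com", "2.1.0", "Pixel 8", "Checkout"))
}
```

Because the diagnostics sit at the bottom of the body, the user can still read and edit everything before sending, which keeps the mechanism transparent.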
4. Learn from app store analytics data
Analytics data for your app is available to anyone who has published an app to the stores: simply log on to your app store account and access the analytics/statistics section.
When useful? General app performance data and metrics can be obtained from the app stores but are mostly limited to high-level information such as app store views, app downloads, app sales and (high-level) crash reporting. If this is good enough for your reporting, you are good to go and can use this data.
Any cautions? Again, there is a big difference between Apple and Google here. If you need insights on platform-specific analytics, e.g. for your Android apps, you can get tons of information from the Google Play Store. The same is more or less valid for iOS apps in Apple iTunes Connect, with the big difference that Apple collects data based on opt-ins. This means that only some users, with data sharing switched on, are contributing to detailed app store analytics. When you need more insights on the performance of your apps (independent of the platform used), switching to a more detailed analytics tool could be necessary (see next item #5).
5. Learn from in-app metrics
In this case metrics are collected in analytics tools such as Adobe Analytics, Google Analytics or other solutions. Most analytics tools require embedding an SDK component in your app, which has to be available cross-platform. With this you are able to receive out-of-the-box analytics data (like that available in the app store), but on top of that you can manually add events in your app that generate specific in-app metrics.
When useful? A big difference from the standard analytics data out of the app store is that there are technically no opt-in limitations as described before. You have much more flexibility in getting cross-platform metrics as well as detailed and specific in-app metrics per platform type. Examples are screen hits, button presses and funnels for conversion data reports. Collecting in-app metrics requires more effort but is great for getting more in-depth insights on how your app is performing.
Any cautions? Data collection should only be used for optimizing and improving apps and limited to anonymized data. Before implementing in-app analytics, make sure that you discuss with your legal department what data you can or cannot collect and how to inform the user about data collection in your app. Due to privacy rules, storing personally identifiable information in analytics clouds is in many cases not allowed. Special agreements can be made to switch off certain data collection options on the vendor side (such as measuring the source IP address). Your test/audit team can validate this before pushing the app live.
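Custom event tracking combined with the anonymization caution above can be illustrated with a tiny event queue that strips property keys that look personally identifiable before anything is recorded. This is a naive sketch under assumed names (the `Event` type, the blocklist, the `track` API are all invented for illustration); a real app would forward the queued events to an analytics SDK and rely on the legal review described above rather than a keyword blocklist.

```kotlin
// Illustrative event type and tracker; the names and the blocklist are
// assumptions for this sketch, not part of any real analytics SDK.
data class Event(val name: String, val properties: Map<String, String>)

class EventTracker {
    private val queue = mutableListOf<Event>()

    // Naive guard: drop property keys that commonly hold personally
    // identifiable information before an event is stored.
    private val blockedKeys = setOf("email", "name", "phone", "ip")

    fun track(name: String, properties: Map<String, String> = emptyMap()) {
        val safe = properties.filterKeys { it.lowercase() !in blockedKeys }
        queue.add(Event(name, safe))
    }

    // Events waiting to be flushed to the analytics backend.
    fun pending(): List<Event> = queue.toList()
}

fun main() {
    val tracker = EventTracker()
    tracker.track("screen_view", mapOf("screen" to "Checkout"))
    tracker.track("signup", mapOf("email" to "user@example.com", "plan" to "free"))
    println(tracker.pending())
}
```

Centralizing all event submission behind one `track` call is the useful pattern here: it gives you a single place to enforce anonymization, batching and opt-out logic, whichever analytics vendor sits behind it.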
6. Do beta testing
According to Google, 60% of the most popular apps on the Google Play Store run beta programs. The nice thing about the Google Play Store and the Apple App Store is that both can support you in running a beta program for a limited group of testers that you select. Most of your users already have an app store account, which makes beta testing through the app store very accessible (no additional registration work is needed).
When useful? Beta testing offers you a way to test prereleases of your app in a controlled fashion. Apps can be installed on lots of different devices with different form factors. Although there are ways to do automated UI testing in the cloud, beta testing allows you to get feedback from a relatively big group of real users on usability, performance issues or even crashes. This is valuable information for the development team, as users can share their feedback one-to-one with the developers. Involving users in beta testing allows you to get feedback first-hand and tackle the bigger errors before pushing your app to a bigger crowd. This could save you some nasty reviews in the app store.
Any cautions? Before running beta programs, make sure that you have release management processes and tooling in place. Your team should be able to deliver automated builds of your app, signed with the right certificates and stored with a logical version number in the file name and app properties. This allows the testers to give feedback and report bugs linked to a specific version of the app. A tool like HockeyApp allows you to set up continuous deployments for your app but requires beta testers of iOS apps to register their device UDIDs separately. Running a beta program through Apple’s TestFlight is more user friendly, since the TestFlight app makes installing beta apps simple, with no need to keep track of UDIDs or provisioning profiles.
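The point about logical version numbers in file names can be made concrete with a small naming helper that your build pipeline might use. The function name, the parameters and the `beta` flavor label are assumptions for this sketch; the idea is simply that every artifact handed to testers encodes exactly which build it is, so bug reports map to one specific release.

```kotlin
// Hedged sketch: derive a build artifact file name that embeds the app id,
// release flavor, version name and build number, so beta testers can report
// bugs against an exact build. All names here are illustrative assumptions.
fun artifactName(
    appId: String,
    versionName: String,
    buildNumber: Int,
    flavor: String = "beta"
): String {
    require(buildNumber > 0) { "build number must be positive" }
    return "$appId-$flavor-v$versionName-b$buildNumber.apk"
}

fun main() {
    // e.g. com.example.myapp-beta-v1.4.2-b87.apk
    println(artifactName("com.example.myapp", "1.4.2", 87))
}
```

A monotonically increasing build number alongside the human-readable version name is the key detail: two builds of "1.4.2" are still distinguishable in a bug report.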
If your app launch date is time-critical, you could consider internal beta testing as soon as you have a minimal viable version of your app. Once your app is complete enough to be useful for a bigger crowd, you can scale up to a public beta test. During the beta testing period you can monitor both the app and the backend performance by checking the (in-app) analytics and the server-side logging.
Announce your beta tests through different channels. The Google Play Store offers a promotion banner and button for users to opt in. Use your website or other channels like mailings or print (QR codes) to promote a beta program to your target audience. Give yourself enough time to collect feedback and prepare improvements across several releases of your app.
7. Do usability testing
If certain features are not yet released and are still in a design/concept phase, important feedback can be gathered from usability tests in a (closed) lab environment or simply by interviewing users in public. Professional usability labs have equipment such as test devices, eye-tracker cameras and video recording tools.
When useful? Organizing a usability test makes sense when your app is getting a design overhaul or if you are introducing large extensions that require multiple iterations of design work. Usability tests can be done with clickable prototypes or fully functional apps. If you don’t feel like burning all your money on development, there are lots of tools available to create clickable prototypes that can run in full-screen mode so that the tester (to some extent) can experience the look and feel of a real app.
Any cautions? To get the most out of your usability tests, make sure that you reserve enough time for the preparations. This means ensuring that demo accounts and devices are available and have been set up correctly to support your test scripts. Because a clickable prototype does not behave the same as a fully functional app, avoid giving the usability testers complicated tasks. When developing a clickable prototype, most of the work is in designing and implementing the flow. Depending on the type of app and use cases, make sure that you invite the right testers. You might need to define personas first so that you invite testers that represent your user base.
To read the original blog please visit: http://labs.sogeti.com/how-to-improve-your-apps/