Product Discovery is fundamental for startups. Here's why.

According to research, around 90% of new startups fail. Evidence from multiple studies shows that there is no single explanation for this; rather, failure results from a combination of different factors.

Key Takeaways: 

  • The discovery process is a crucial step for any startup.
  • The most common reasons why startups fail.

One of the crucial steps in saving a startup from failure is the product discovery activity. This phase helps you understand the primary value proposition of the product and its likely fit with the market.

What is Product Discovery?

Product Discovery is "the data-informed reduction of uncertainty regarding problems worth solving and solutions worth building through a series of nonlinear activities, conducted as a cross-functional team."

This is a critical stage for the sustainable life span of any startup or new venture. Without product discovery, companies have no credible way to prove or disprove assumptions about their customers. In the simplest terms, a company that lacks concrete product discovery is more likely to base its products or services on wrong assumptions. In the past decade, product discovery practices have become far more widespread because they help companies validate opportunities in the risky world of product development.

Product Discovery typically means an (adjustable) period during which companies focus on building the right thing instead of building the thing right (which would be Product Delivery). In any efficient startup, product teams work in either the problem space or the solution space: they either work to understand whether a problem exists for their users, customers, or stakeholders, or they focus on executing a matching solution.

Although working in both spaces is necessary to satisfy users and support the business, a weak plan of action often gets the allocation of time and attention wrong. Product discovery is mostly about the problem space, which demands the team's explicit attention and a thorough ideation process. Experts generally divide this space into further steps so that each can be handled carefully.

The product discovery process consists of a series of steps the company works through:

Research

  • Curate assumptions around the product idea and see how it can assist the user.
  • Explore the issues users are currently facing by interviewing them or conducting surveys.
  • Create different user personas and map out how each type of user will interact with the product.

Once the assumptions, personas, and survey results for the product have been gathered, the next phase moves towards the solution space.

Alignment

  • Spend enough time analysing user problems and understanding them, so that effective solutions can be developed.
  • Define the core features of the product that will help customers most in solving their pain points.
  • Inspect all types of risk involved in product development: value, usability, and feasibility risks.
Visual Representation of problem and solution space by Tim Herbig
To avoid confusion, the product team keeps the problem space and the solution space of the product clearly separated.

Why is Product Discovery needed?

In modern, agile ways of working, the product discovery stage is considered pivotal, primarily because this phase lays the foundations of aspiring products. Development happens in stages: we start by building in smaller batch sizes, presenting each batch to stakeholders, and eventually move towards customer input once it becomes clear how important it is for the customer to understand how the product will be used.

At this point, user experience design and design thinking come into play. They help the team focus on what the product actually requires, which means placing significant emphasis on user empathy.

This notion has continued to develop, with teams now asking whether users actually want what is being built and, most importantly, whether they are solving the correct problems for their users.

Over time, new techniques have been introduced to make the product discovery and delivery process faster. Some teams have adopted the design sprint process, a focused five-day session in which problems and desired outcomes are discussed and worked through. A particular shortcoming, however, concerns the specification of objectives and key results: if rushed, the team may not be entirely on the same page, leading to unproductive sprints. Hence, it all comes down to how a team organises its sprints to make them as productive as possible.

How does the discovery process work?

During any product development life-cycle, a critical role is played by OKRs (objectives and key results). They enable the team to answer two questions: how to focus on specific outcomes, and how the team lead can make sure everyone is aligning their goals to a particular result. By doing this, the team can shape solutions around the most vital factor of any product: the objectives it aims to achieve. While deciding on these foundations, the team needs to stay quick on its feet and keep the learning and development process fast.

As the product team builds out its process, it can begin focusing on outcomes such as increased engagement or increased revenue, while ensuring customer satisfaction and a high NPS score. This gives the team a path to follow as it develops and refines the solution needed to achieve the outcome while overcoming the underlying problem. The process can get confusing, which is why teams have come up with a concept called the Opportunity Solution Tree.

This tree breaks the product discovery process down into manageable parts; without it, it is hard to develop a product in an impactful manner. The tree starts with outcomes: the team determines the precise desired results that would improve the user experience, along with the metrics required to measure whether those outcomes are achieved. This is precisely why outcomes need a quantitative measurement, such as increasing revenue by 10% or raising the NPS score by five points. By starting with the outcomes, the team is well equipped to create key results that give it a path to follow through the subsequent steps.

Once the initial metrics have been defined, the next thing the team needs to do is discover the opportunities that will drive the desired outcomes. As the saying goes, "you cannot find a thorough solution without deconstructing the problem thoroughly"; defining a problem and framing it in a particular way can significantly influence how it is solved and the quality of the solutions the team proposes. It is important to note, however, that the word "problem" can itself become problematic for certain products. Problems encourage the team to fix something, but sometimes products do not need fixing; they need to be developed further for better functionality.
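To make the structure concrete, here is a minimal sketch of how such a tree might be represented in code. The class names and the example outcome, opportunity, and solutions are purely illustrative; they are not part of any particular framework or tool.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative, minimal representation of an Opportunity Solution Tree.
# The names (Outcome, Opportunity, Solution, Experiment) are hypothetical.

@dataclass
class Experiment:
    hypothesis: str
    result: str = "pending"

@dataclass
class Solution:
    idea: str
    experiments: List[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    problem: str
    solutions: List[Solution] = field(default_factory=list)

@dataclass
class Outcome:
    metric: str    # e.g. "NPS"
    target: str    # e.g. "+5 points this quarter"
    opportunities: List[Opportunity] = field(default_factory=list)

# Example: one desired outcome, one opportunity, two candidate solutions.
tree = Outcome(
    metric="NPS",
    target="+5 points this quarter",
    opportunities=[
        Opportunity(
            problem="New users abandon onboarding before finishing setup",
            solutions=[
                Solution("Shorter, guided onboarding checklist"),
                Solution("In-app chat support during setup"),
            ],
        )
    ],
)
```

Laying the tree out this way makes it easy to see, for each measurable outcome, which opportunities are being pursued and which candidate solutions still need experiments.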

As a startup, what should you expect from this phase?

For any startup seeking a firm position in the market, this phase can make or break the deal. Once the team understands its product outcomes and opportunities, it can finally move on to the solution phase, where all the stakeholders of the product team come together to ideate and figure out innovative ways to achieve the team's goal. While coming up with solutions, the team must keep conducting weekly experiments, so that it can continuously check whether the data shows the desired results and whether the solution is viable.

Instead of waiting until the very end to conduct these experiments, running them weekly lets the team look at the latest data and research and make intelligent choices about the next step in developing the solution. Testing the prototype every week ensures teams do not waste time creating a product of no value; their time and effort are used productively, and every stakeholder (product manager, tech lead, UX designer) stays on the same page while developing the new product together.

Final Words

Many teams have adopted this practice, though they do not always realise that their method follows a set pattern. With a focus on product discovery, teams release products with due diligence, get the user experience right, and make sure the team's effort genuinely makes a difference. Without understanding the overall picture and stepping back for a broader analysis, teams cannot develop unbiased products, achieve their outcomes, and pursue their product opportunities in the best way possible.

These learnings from successful product teams have been used to design the product discovery workshop at Venturenox. In the workshop, we help you discover the needs of your users and the best positioning for your product. Once the development starts, we continue to learn from your product’s users and take all the proper steps for your product.

Deep Learning: An Overview of Its History

Artificial intelligence has become mainstream over the past few years and has generated a lot of interest from governments and private organisations. Several governments have created ministerial positions for the subject, and militaries around the world are preparing for an AI-based arms race. The hype around AI and its anticipated impact is on a par with the industrial revolution.

In this article, we will try to dig down to the basics and explore why artificial intelligence has become mainstream only now, and the promise it holds.

The Genesis

The idea behind machine learning was first suggested by Alan Turing in 1950 in his paper "Computing Machinery and Intelligence", where he explores whether a computer can consistently perform a task indistinguishably from a human. This idea is commonly known as the Turing test, or the imitation game. Turing also explored how such a "learning machine" might be created, and predicted that, through technological advances, a machine able to play the imitation game would be programmed by the end of the century. He also suggested that problems like playing chess and understanding natural language were good areas to start with.

Early work in the area of intelligent machines adopted one of two broad approaches: rule-based or statistical.

Some researchers worked on heuristics-based systems and believed that human behavior or intelligence could be effectively captured through a set of elaborate rules. This approach required building exhaustive grammars and formal systems to encompass a problem domain. Expert systems, which rose to prominence in the 1970s and 80s, were part of this approach.

Other researchers leaned towards a statistical approach and leveraged several principles of statistics and mathematics to create machines which seemed to “learn”. Early efforts were focused more on pattern classification, and with time several sophisticated tools were developed which were used to solve problems in many domains.

The statistical approach required considerable data for experimentation and learning, as well as a lot of computing resources. These limitations resulted in a slow pace of research in this area until the 1990s.

In parallel with these efforts, some researchers were working on artificial neural networks, which they modelled loosely on the human brain. Early research in neural networks focused on mimicking the brain and how it learns patterns. Later, the research moved away from biological models and towards solving domain-specific problems.

Early neural networks were simple and consisted of a few artificial neurons connected to each other. Each neuron receives input signals, which are assigned weights; the sum of the weighted signals determines whether the neuron passes a signal onwards or not. When combined with mathematical optimization techniques, each neuron progressively learns to assign better weight values to its input signals, and when neurons are arranged in layers, each layer incrementally learns more abstract ideas.
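As a rough illustration, the snippet below sketches a single artificial neuron of this kind in Python. The input values, weights, and threshold are made up for the example and are not taken from any real model.

```python
import numpy as np

def neuron(inputs, weights, threshold=0.0):
    """A single artificial neuron with a hard threshold activation.

    Each input signal is multiplied by its weight; if the weighted sum
    exceeds the threshold, the neuron "fires" (outputs 1),
    otherwise it stays silent (outputs 0).
    """
    weighted_sum = np.dot(inputs, weights)
    return 1 if weighted_sum > threshold else 0

# Illustrative values only: three input signals and their learned weights.
inputs = np.array([0.9, 0.2, 0.4])
weights = np.array([0.5, -0.3, 0.8])

print(neuron(inputs, weights))  # prints 1, since 0.9*0.5 - 0.2*0.3 + 0.4*0.8 = 0.71 > 0
```

Training amounts to nudging the weights so that the neuron fires for the right inputs; stacking such units into layers is what allows more abstract patterns to be learned.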

Like the statistical approach to machine learning, the research on neural networks was slow until the 1990s due to lack of computing resources and data for learning.

The Revolution

By the 1990s, the researchers who believed in rule-based systems were starting to hit limitations. In real-world systems, information is not always certain and many determinations are subjective. In areas such as speech recognition, statistical models such as Hidden Markov Models were already achieving success. Some limited image processing tasks, such as recognising handwritten ZIP codes, were being done with neural networks, but their training required a long time and a lot of computing resources.

However, a few technological and scientific breakthroughs led to greater adoption of, and progress in, neural networks over the next two decades. The first obvious factor was the improvement in the capability of computers, along with their cost becoming far more affordable. Another factor was that more and more data was being digitised, especially with the widespread use of the internet. This made training data much easier to obtain, at least for the large internet companies.

The deep learning revolution started around 2010. In 2009, researchers at Google attempted to use an Nvidia GPU for training a neural network and found that GPUs were about 100 times faster than CPUs for this task. Neural networks with several layers (deep neural networks) had been around for a long time, but it now became feasible to train them on large data sets.

In the early 2010s, researchers found that using lower-quality data for training neural networks does not necessarily hurt the trained model's ability to make accurate predictions, because deep neural networks are very good at handling uncertainty. So instead of training image classification models on high-resolution images, and speech recognition models on segmented speech data (manually separated words or phonemes), researchers and engineers began training deep learning models on low-quality images and noisy, unsegmented speech, and obtained much higher accuracies.

The essence of end-to-end deep learning is that a deep neural network with many connected layers, trained on GPUs over a large volume of raw, low-quality data, can learn patterns very effectively. There is very little need for pre-processing or for identifying features of the data through the usual scientific means; the deep neural network does that itself.
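As a rough illustration of what "end to end" means in practice, the sketch below (in PyTorch, with made-up layer sizes and random stand-in data) feeds raw pixel values straight into a stack of layers, with no hand-crafted feature extraction before the network. It is only a sketch, not a production training loop.

```python
import torch
import torch.nn as nn

# Illustrative only: a small "end-to-end" classifier that maps raw pixel
# values directly to class scores. Layer sizes and the 10-class output
# are made up; no feature engineering happens before the network.
model = nn.Sequential(
    nn.Flatten(),              # raw 28x28 image -> 784 values
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),         # scores for 10 classes
)

# One training step on a random batch, standing in for real (possibly noisy) data.
images = torch.rand(32, 1, 28, 28)      # batch of raw, unprocessed images
labels = torch.randint(0, 10, (32,))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

The network learns its own internal representation of the raw input during training, which is precisely what removes the need for manual pre-processing.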


Images courtesy Christopher Manning & Russ Salakhutdinov, JSM, 2018-07-29

With these revolutionary breakthroughs, deep learning has been used to obtain much better results on old problems. In some use cases, such as image classification, it has even surpassed human performance.

If you think a task can be done by a human, it can likely be done by deep learning as well, provided you narrow down the problem statement and collect enough data.

Conclusion

While deep learning holds a lot of promise, it does not come without challenges. It works well only when a lot of annotated data is available for training, and often the people who actually have access to such data are not aware of the power it can unleash. Managing such datasets and training on them also requires a lot of computing power. Once you are aware of the power of deep learning and have an actual use case with commercial value, you will start to see the possibilities.

Venturenox has worked on some very interesting problems using deep learning. We have built a facial recognition system which gives close to 100% accuracy, and we have also worked on a wildlife detection system. We are currently working on an advanced use case in bio-medical imaging and a neural-network-based disease prediction system. We have worked hard to acquire the skills and develop the vision required to build successful products using deep learning, and we would like to work with people who want to change the world.