Defining the Design Problem

You’ve heard it before: design is about solving problems.

Whether it’s building a new playground or developing a mobile app for pet groomers, there are multiple ways to satisfy a project brief. However, in order to design a product that successfully delivers business value, it is critical to first clearly define the design problem.

Ask your clients these three key questions at the start of every project:

  • What is the business objective?
  • What is the context of product use?
  • What are user goals?

What is the business objective?

This is the most critical question of the three, yet some design teams still fail to ask it of stakeholders.

Understanding business objectives helps your design because it allows you to drill for more specific information. Follow-up questions can unlock a wealth of insights that influence the design approach:

  • How do you know this is an issue?
  • Who is affected by the issue?
  • When and how often does this occur?
  • What benchmarks do you have and what change do you expect?

Imagine that your client aims to reduce tech support calls for an e-commerce site.

If customers struggle to complete purchases, drilling into root causes might reveal that logging into an account is a major hindrance, or that the website refuses to validate shipping addresses. Interviews with tech support teams can also reveal pain points that customers are experiencing.

Understanding business goals also helps the design team focus and refine work through iterative user testing before full product launch.

For instance, if time on task is expected to decrease by 15% following an interface refresh, that’s a clear target to test against with prototypes.
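As a sketch of how such a benchmark might be checked against prototype test data (the baseline, session times, and 15% target below are all hypothetical):

```python
# Hypothetical check of prototype time-on-task results against a
# stakeholder benchmark of a 15% reduction; all numbers are made up.

def meets_benchmark(baseline_secs, prototype_times, target_reduction=0.15):
    """True if the mean prototype time on task achieves the target reduction."""
    mean_time = sum(prototype_times) / len(prototype_times)
    return mean_time <= baseline_secs * (1 - target_reduction)

# Current interface averages 120 s per task; five prototype sessions measured:
print(meets_benchmark(120, [95, 102, 88, 110, 99]))  # True: mean 98.8 s <= 102 s
```

A simple pass/fail check like this makes each round of iterative testing answer a concrete question rather than a vague "is it better?"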

What is the context of use for this product?

Answers to questions of where, why, when, how often, and so on describe the context of product use and inform multiple design decisions.

At a macro level, context informs which technology platform should deliver the design. At a micro level, context places constraints on interactions and the visual treatment of the interface.

Imagine a food manufacturer who wants his quality control technicians to enter production data (such as oil temperature) on a kiosk-based laptop on the factory floor. On the surface, this is a simple problem. But would this be a wise technology choice if the technicians have to enter multiple production values every five to ten minutes? A tablet that the user can carry would be a better choice given the context of use, but if the client doesn’t volunteer such information, how could the design team know to make this recommendation?

Technology platforms have their own sets of best practices and capabilities. However, designers still have to consider interactions and visual treatment of the interfaces.

For instance, an athlete might count her burpees out loud to her smartwatch, yet ambient noise could drown out her voice commands. Similarly, luxurious colors and fancy button animations are suitable for a gaming app but not for a paramedic’s emergency response device used at night.

Drill deep to understand the intended context of use by interviewing and observing users in their environment. Do not assume that your client has done the necessary research to uncover the needs of her users, or that she understands the implications of context requirements on the design.

What do users expect?

Business and user goals can be very different. Successful design finds common ground to satisfy them both.

Business stakeholders are often biased or completely naive about their users, making it all the more important to conduct research directly with the intended audience. Understand not only what users need to do, but also what motivates them and what attitudes they have toward their tasks.

When business and user objectives are mapped out, designers should create user flows that support desirable user behavior while satisfying user needs and aligning with their attitudes.

For instance, Amazon prompts shoppers with additional products while at the same time offering hassle-free one-click ordering. Similarly, TurboTax owes part of its success to a clean, playful design that supports users through a task they likely find tedious, unpleasant, or even anxiety-inducing.

That said, a fine line exists between supporting users and driving the client’s business. Calls to action that appear too frequently, or pages that lack the information users need to make decisions, will satisfy neither the users nor the business.


Design briefs present a problem that can be solved in different ways.

By investigating business objectives, context of product use, and user goals, you’ll gather necessary data that helps narrow down and refine a single design approach.

Data-informed, rather than assumption-informed, design is the secret sauce of successful business products. You just need to ask the right questions.

This post was originally published here.

Bringing UX to a Startup

Over the last few years the meaning of user experience has shifted from ‘making pretty websites and mobile apps’ to ‘key business strategy.’ Yet, while many organizations recognize the value of investing in user experience, many struggle to deliver it effectively within set timelines and budgets. The secret to success lies in understanding key user experience activities, their outcomes, and their value in the product development cycle. This talk will cover:

– What are the key user experience activities?
– What are the outcomes of key user experience activities?
– When should each user experience activity be performed?
– Who should perform different user experience activities?

This talk was presented at University of Illinois Research Park/EnterpriseWorks in April 2017

Human Factors: The Secret Weapon of Successful Enterprise UX

The business benefits of intuitive, user-friendly and pleasant technologies have been highlighted in various sources, including prominent media outlets like the Harvard Business Review and Forbes. Delivering positive user experiences (UX) is a task for a multidisciplinary team. First, researchers investigate and document the current state of the user experience. Then, designers craft solutions to address the pain points and challenges in the present user experience. And finally, engineers translate designed tools into tangible technology products that yield a much improved user experience.

Standard Research Methods Do Not Always Work

The first step in the process of delivering a positive user experience is user research. In short, it is a multi-faceted approach to understanding users, the business, and the technology they deal with. Contextual inquiries, surveys, and interviews are among the most common user research methods and yield great insights. However, multi-tool, multi-user-role, multi-location, multi-step process situations present much more complex user experience problems. Here, the standard research methods fall short.

When investigating user experience in complex environments, human factors expertise is an absolute must. Human factors is a scientific discipline concerned with how users interact with their environment from psychological as well as physical perspectives. Psychological facets focus on problems like “how does the user remember and execute the exact sequence of buttons to press to initiate a rocket launch?” Physical considerations deal with tangible objects and address questions like “does the size of the buttons on a keyboard allow easy and error-free data entry when the user is wearing gloves?” Answering such questions requires training in human factors methods and theories, something that most user experience practitioners do not possess.

What is so special about Human Factors?

Both psychological and physical aspects of technology use matter, especially in the enterprise domain. If technology is not optimally designed to prevent errors, speed up data entry, be convenient to carry around, and so on, it negatively impacts business operations and can even lead to severe accidents and disasters. Of course, standard user research methods can unveil the need for a streamlined process or an improved interface design. However, the methods for documenting and quantifying many of the relevant components of user experience, such as critical and non-critical decision points, keystrokes, click paths, and cognitive fatigue, come from the human factors domain. For instance, human factors experts can quantify exactly, and in several different ways, how distracted drivers who text on their phones are. Such insights make a significant impact on the design of enterprise software, especially tools used in industrial environments.

User research in any technology design process is critical. Methods for user research should be selected by experts, as they know best the pros and cons of each, and which ones will help meet the business goal. Human factors expertise may not always be required as part of the user research process, but it sure is a secret weapon when handling complex enterprise operations and technology problems. Make sure your research team has the appropriate expertise for your business and technology projects.


Heuristic evaluations and usability testing are critical to your business

Did you know that difficult-to-use, ineffective, cumbersome software leads to low user adoption, which accounts for about 70% of failed projects? Creating user-friendly products is crucial to user adoption and success. How can you make sure that your product is user-friendly? Heuristic evaluations and usability testing are two powerful methods that help ensure the user-friendliness of a product. Every business should care about how intuitive and easy to use its devices or software are, because these factors significantly influence user adoption rates and, as a result, the financial success of the company.

Imagine a food delivery website such as GrubHub, EAT24, DoorDash or OrderUp: if the user presses a button to complete his lunch order, but the web page doesn’t change, the user is left with the impression that he did something wrong, that the order was not placed, or that the system is broken. However, the backend processes may be fully functional and the chefs may have already started cooking! Such user-system miscommunications cost companies a lot of money, and can be easily eliminated by performing heuristic evaluations and usability testing before product launch.

What are these methods?

Heuristic evaluation consists of an expert assessment of a product’s usability against a checklist of usability standards. Depending on product complexity, heuristic evaluation can be completed in as little as a couple of days. Evaluators capture inconsistencies in the design, problems in navigation, issues with errors shown (or not shown!) to users, and other interface elements.

When a usability test is conducted, a user researcher observes how intended product users interact with a prototype and whether users can complete various tasks. Researchers can record various metrics about user behavior, such as task completion failure and success, time on task, number of errors, perceived ease of use and so on. Usability testing reveals what features and functionality of the product can be improved or removed.
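For illustration, the metrics above might be tallied as follows. The session data are made up; the SUS scoring follows the standard published formula (odd-numbered items contribute the score minus 1, even-numbered items contribute 5 minus the score, and the sum is scaled by 2.5 onto a 0-100 range):

```python
# Hypothetical usability-test metrics, computed from made-up session data.

def task_success_rate(outcomes):
    """Fraction of sessions in which the task was completed (1 = success)."""
    return sum(outcomes) / len(outcomes)

def sus_score(responses):
    """Score one participant's ten SUS responses (each rated 1-5)."""
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # odd-numbered items at even indices
                for i, r in enumerate(responses))
    return total * 2.5                            # scale to 0-100

completions = [1, 1, 0, 1, 1]                     # five sessions, one failure
print(task_success_rate(completions))             # 0.8
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])) # 85.0
```

Tracking numbers like these across test rounds shows whether design iterations are actually moving the product toward its usability targets.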

Both of these methods are completely technology-independent and can be utilized on all types of devices, from microwaves to augmented reality headsets.

When should these methods be used?

Heuristic evaluations and usability tests can be utilized at any stage of a project, but the earlier they are conducted the better. Heuristic evaluation should be conducted after each design sprint. When prototypes incorporate heuristic evaluation feedback and the interface is updated to remove major flaws before users ever see the design, usability testing can reveal subtler usability issues that are pertinent to the identified audience.

If heuristic evaluation or usability testing cannot be conducted early in the project life cycle, it is still better to conduct these activities later rather than never. As the graph below shows, the cost and project time increase exponentially with the increasing delay of implementing changes to the interface or information architecture. The highest cost to re-design and re-code the product is incurred after launch; this can be mitigated when user testing activities are incorporated into the project lifecycle.


Why do these methods work?

Heuristic evaluation and usability testing both reveal design flaws that would impact usability, user experience and user adoption rates, all of which directly affect the business success of any product. When these methods are utilized, design problems can be prioritized by severity, which enables the team to cheaply and quickly fix the issues before launch. This significantly reduces overall project cost and timeline, minimizes rework to fix bugs and address user feedback post-release, and increases the likelihood of user adoption as a result of high product usability. Because of their direct and significant impact on business success, there is no excuse to skip heuristic evaluations and usability tests during the product development life cycle.

User Research is NOT Market Research

The term “research” is not new to the business world, yet its meaning varies along a wide scale. In some companies, research is identified with genius engineers pushing technology capabilities in Research & Development departments. In other realms, interns proudly present their team leaders with research findings gathered from the numerous corners of the internet (thank you Google!).

The masses reside between these two extremes and that is where things get muddy. Many stakeholders identify “research” with market research activities and results. However, in the technology space where identifying user needs, product requirements and design strategy are critical, user (not market) research provides the best insights. To business stakeholders, the lines between the two may not be clear; hence, we clarify the differences below.

Market Research

The focus of market research is on the consumer in the market economy; specifically, his or her demographics and purchasing behavior. Here, research uncovers which buckets customers fall into as it pertains to their gender, ethnicity, income and education levels, areas of residence and work, shopping preferences, social media engagement, and so on.

Such an inquiry results in Target or Buyer Personas, which are used to inform business decisions about what might make a Persona receptive to a product or service, what return on investment might be expected, and how to best market to this customer.

However, because market research is focused on current patterns of consumer behavior and does not delve deep into reasons behind them, these Personas lack detail for defining design requirements, product functionality, and prioritizing features.

User Research

Unlike market research, user research places the focus on the user as a whole entity in the context of his or her environment. Researchers answer questions like “what does a day in her life look like?”, “what activities does he engage in?”, and “what motivates and frustrates her?” (some of which also appear in buyer personas) by utilizing ethnographic research, interviews, surveys, usability tests, A/B tests, analytics of daily behavior, diary studies, and other methods.

This multi-faceted approach to understanding the user uncovers various pain points and needs, even those that users cannot verbalize themselves. Such insights are key to driving innovation in business. User needs communicated in User Profiles translate directly into design requirements and product or service functionality, or convey novel problems to solve.

Side by Side

Market research reveals what has happened up to now, especially as it relates to buying behaviors and patterns, but does not indicate where a business should go next. User research, on the other hand, reveals not only what users are currently experiencing but also why. When frustrations or pain points are discovered, they open doors for business opportunities to address those user needs.

For example, if the business objective is to engage with mothers of middle-school athletes, market research will reveal the best avenues for advertising to these Personas and for connecting with them. User research, additionally, might reveal that these mothers struggle juggling multiple athletic event schedules and constantly transporting their kids to various activities. Addressing this user pain point through a service or a product could become a profitable line of business.

Clearly, both market and user research have their purpose. For ground-breaking innovation in the business space, market research only scratches the surface of users’ lives. For greater insights, user research is a must.


Building Blocks of Good Graphs

Selecting an appropriate chart type to present data visually is one of three critical steps to creating an effective visual vessel of information. The second step requires that supporting details about the data are provided. Axis labels, legends, units and other elements deliver the context within which to interpret results. Lack of such details requires the audience to make assumptions about the data, or utterly confuses them; in either case, it is a failure of clear communication. Here we discuss the building blocks of graphs, how to implement them effectively, and illustrate suggestions with a couple of examples. In the figure above, inspired by Stephen Few, we illustrate the many different elements that compose various charts and which we reference in our current discussion.

Core elements

Axis, category and unit labels

Axes reflect what kind of information is presented and ought to be labeled. Numerical axis labels should include the units the data are measured in. For example, if company revenue across different continents is reported, do the numbers represent profit in dollars, yen, rubles, or some other currency? When a measure from a standard tool, such as a Net Promoter Score or the System Usability Scale (SUS), is reported, the metric should be referenced in the axis label in lieu of units.

An axis that represents a data category does not necessarily need an axis label, but must include category labels if more than one data point is shown. For instance, if monthly revenue is presented, months would serve as category labels for each numerical value shown in the chart, while the axis label (“Months”) can be omitted. When a single data point is shown, either an axis or a category label is sufficient. See elements B through G on the figure above.

Scale, numerical axis and tick marks

A scale divides an axis into equal segments, and tick marks denote those even segments of the scale. Tick marks on quantitative scales establish where on the axis specific number values are placed. Intervals that are nice round numbers, such as 10s or 100s, make it easy to read the chart. Tick marks should be designed to minimize visual clutter while still allowing the reader to quickly reference a data point to its value. A good test for too few tick marks is whether the audience can quickly and easily extract (an approximate) value of a particular data element in the chart. If not, then the number of tick marks should be increased.
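The “nice round numbers” guidance above can be sketched as a small helper that picks a 1, 2, or 5 times a power-of-ten interval; the max_ticks budget is an assumption for illustration:

```python
import math

def nice_tick_step(data_range, max_ticks=8):
    """Pick a 'nice' round tick interval (1, 2, or 5 times a power of ten)
    so a positive data range is divided into at most max_ticks segments."""
    raw = data_range / max_ticks                      # exact step needed
    magnitude = 10 ** math.floor(math.log10(raw))     # nearest power of ten below
    for factor in (1, 2, 5, 10):
        if factor * magnitude >= raw:                 # round up to a nice multiple
            return factor * magnitude

print(nice_tick_step(7350))   # 1000
print(nice_tick_step(10, 5))  # 2
```

A revenue axis spanning 7,350 units thus gets ticks every 1,000 rather than some awkward interval like 919, which keeps values easy to read off the chart.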

With very few exceptions, a numerical axis should always include 0 for appropriate representation of values. Manipulating the size of scale increments or the axis’s minimum and maximum points distorts the data. If the goal is to demonstrate a difference between data points, appropriate statistical procedures, rather than manipulation of numerical axes, should be used for supporting evidence.

Tick marks are also used on categorical axes. However, because category labels already denote distinct data points, tick marks serve no additional purpose and should be eliminated to reduce visual clutter. See elements G and J on the figure above.


Legend

When more than one category of data is presented in a chart, a legend informs what different colors or patterns represent. Occasionally, variables are labeled directly on the chart, omitting the need for a legend. Line graphs can often be shown without a legend by labeling categories next to the lines that are used to represent them. However, the choice to label categories directly on the graph should be made after considering data and graph design complexity; because a legend reduces visual clutter, it often is a better choice than labeling categories on a chart. A legend can be omitted when a single category is shown and category labels are used to describe the data.

Legends should be placed closely enough to the data components for easy reference, but in a way that does not interfere with the data shown. Moreover, because they serve a supporting role to the graph, legends should be designed to look less prominent than data elements. For example, they should have no borders around them and should be fairly small while still legible. See element H on the figure above.

Chart title

The title communicates what information is presented in the chart. An effectively worded title removes the need for a subtitle that can visually clutter the chart. Whenever possible, the title should serve as the main takeaway of the data rather than a general description of what is shown in the graph. For example, while “Mobile device breakdown in Houston” informs the reader what the data are, the title “Majority of Houston residents have iPhones” expands that description into a summary of results. Titles are typically placed at the top of the graph and should be positioned closely to the information they describe without interfering with it. See element A on the figure above.

Optional elements

Graphs occasionally include additional elements, such as confidence intervals or data labels, that may not be necessary in every instance. When designing the chart, consider whether these components serve any real purpose, or whether they only add visual clutter.

Grid lines

Tick marks are usually sufficient to reference a data point to its value when graphs are fairly simple and show a small set of data. However, when charts are wide and contain large sets of data, grid lines help guide the eye between data categories and their numerical values. In large graphs, grid lines also help increase precision. Moreover, grid lines can be used when the goal is to highlight small differences between data values. In the cases where grid lines are used, they should be designed to appear less prominent than the data, and used sparingly to avoid visual clutter in the graph. See element L on the figure.

Reference lines and zones

A reference line or zone can provide context for the data by visually comparing it to some predetermined value. Reference lines or zones are especially useful when the goal is to show data deviations from the norm or to highlight that a benchmark was met or surpassed. For instance, when presenting a software usability score (as measured by SUS) to a stakeholder who is unfamiliar with the metric, highlighting the desirable range for such scores will help her meaningfully evaluate the reported results.

Reference lines can also be used to highlight significant events during the period of data recording that may have impacted the results. For example, e-commerce website traffic may significantly increase at the start of a marketing campaign and drop off soon after the campaign wraps up; marking such an event on a website traffic report can help stakeholders make informed business decisions about their marketing strategy and website. See element M on the figure above.

Trend lines

A trend line on a graph shows an overall change in the pattern of data across time or some other variable represented by the horizontal axis. Trend lines can be useful for highlighting a pattern in the data that may otherwise be obscured by individual data points. For example, if a particular stock price varies drastically over a period of time, it might be hard to gauge whether overall its performance is improving. A trend line, in this case, could show a decline, improvement, or no change in price over time. However, if data points themselves already show a clear pattern, then trend lines only add visual clutter and should be omitted from the graph. If trend lines are used, and if the overall pattern of data is the main takeaway of the chart, the trend line should be visually highlighted as more important than individual data. Alternatively, if the trend line is secondary to the graph, it should be visually subtle. See element K on the figure above.

Ranges and error bars

In some cases, variations in data values, as reflected by such measures as standard deviation or confidence interval, may be required in the graph. Visually, a range or an error bar is shown as a horizontal or vertical line extending past the data point. Whether such detail ought to be included in the chart is dictated by research methods, statistical analyses, type of data reported, and by the message the graph is expected to communicate. If the range of values is more important than individual data points, then error bars should be visually prominent. Otherwise, error bars ought to be subtle.

Data labels

Data labels show specific value information for each data point on the chart. The purpose of graphs is to visually showcase patterns in the data and not to communicate numerical precision; tables are much better suited for this goal. Therefore, as tick marks on a numerical scale and grid lines already allow the reader to quickly reference data points to their values, data labels are redundant and add visual clutter to the chart. In the rare case when numerical data are presented on a chart without a quantitative axis, data labels can be used. In such instances, ensure that numbers are rounded to avoid unnecessary precision, and position data labels relative to the data points in a way that makes the information easy to read and reference quickly without cluttering the chart. See element N on the figure above.

Do’s and don’ts in practice

Many options in terms of chart type, layout, style, and supporting elements are available. Here we illustrate a couple of use cases of poor and improved graph design.

Case 1: Bar graph

Axis and chart titles on the right side graph provide focused information about what the data show. Specifically, the good graph specifies that preference ratings were measured on a scale from 0 to 10. Additionally, the title on the right summarizes data in a single takeaway message. As shown on the left side graph, slanted category labels can create a rough visual edge, especially when label length varies, and can be harder to read than horizontally placed labels; whenever possible, opt for the latter. Because only three data points are represented, it is easy to extract their values by referencing the numerical axis. In the example on the left, the grid lines add unnecessary visual clutter and can be completely removed from the graph. Additionally, the scale on the numerical axis can be simplified and include tick marks for every 2 rather than every 1 point. Finally, error bars on the right side graph are less visually salient than on the left, but still provide information about variability in preference ratings in the sample.
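A minimal matplotlib sketch of the “good” bar graph described here might look as follows; the products, ratings, and error values are invented for illustration:

```python
# Sketch of the improved bar graph: 0-based axis, round tick interval,
# no grid lines, horizontal labels, subtle error bars. Data are made up.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

products = ["Product A", "Product B", "Product C"]
ratings = [7.2, 5.8, 8.4]   # mean preference on a 0-10 scale
errors = [0.6, 0.9, 0.5]    # variability in the sample (e.g. 95% CIs)

fig, ax = plt.subplots()
ax.bar(products, ratings, yerr=errors, capsize=3,
       color="#6baed6", ecolor="#9e9e9e")   # muted error bars, not salient
ax.set_ylim(0, 10)                   # include 0 so bar heights are undistorted
ax.set_yticks(range(0, 11, 2))       # tick every 2 points rather than every 1
ax.set_ylabel("Mean preference rating (0-10)")
ax.set_title("Product C is the most preferred option")  # takeaway, not description
ax.tick_params(axis="x", length=0)   # no tick marks on the categorical axis
ax.grid(False)                       # three bars need no grid lines
fig.savefig("preference_ratings.png", dpi=150)
```

Category labels stay horizontal by default with only three bars, and the title states the result rather than merely naming the data.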



Case 2: Line graph

As in the previous example, the graph on the right has more descriptive axis and chart titles that provide focused information about the presented data. A reference line on the right indicates the occurrence of a business event and provides additional context within which the audience can interpret the data. Data labels clutter the chart on the left side and are not really necessary. As shown on the right side, each data point can be quickly aligned with its value by using the grid lines. The scale on the numerical axis can also be adjusted to show larger scale increments. Tick marks on the categorical axis, as shown on the left, serve no purpose and can be removed to reduce visual clutter. Circles on the right graph highlight exactly where on the line the data points are placed, and allow an easy reference between a point and the month it is associated with on the categorical axis. Finally, labeling data directly on the chart removes the need for a legend and reduces visual clutter.

To conclude

Once the appropriate chart type to present data visually is selected and supporting details about the data are provided, the last step in effective visual communication with graphs is to carefully design these charts. Design ensures not only that the graphs look aesthetically appealing, but also, and more importantly, that the intended message is communicated. For example, color or texture can be used to highlight a particular data value from a larger set. Similarly, a color gradient can emphasize change in data, such as increase or decrease in cost of operations at a business, over time. A poorly designed chart may not only fail to communicate the intended message, but may also mislead the audience. In the business realm, visualized data are used to inform decisions; hence, graphs that are unclear or misleading can negatively impact operations and the bottom line. Successful data visualization stems from the synthesis of appropriate research methods, analysis, and keen application of design principles.

Leave User Research to the Experts

User research and user-driven data strategy are the buzzwords of 2015 market trend predictions issued by giants like Forrester Research and the Harvard Business Review. Data about users, their environment, needs, and attitudes can inform business decisions and drive product strategy. But what exactly does user research entail? To many in the business realm, research is a vague concept covering activities from searching for information online and interviewing people to administering surveys and maybe even performing usability tests. It is understandable that business professionals rarely have the scope of knowledge and appreciation for the methods and tools at the disposal of a trained user researcher. Yet, precisely for this reason, stakeholders should leave the research strategy in the hands of an expert rather than lock the research team into a specific method of data collection.

Research methods employed to shed light on how to target business goals come from a range of disciplines such as human factors, psychology, sociology, business management, marketing, and so on. While a list of these methods and their descriptions can span pages, technique flavors and intricacies are likely to be of little interest to a stakeholder (and, hence, are not included in this post). What is important, however, is that each method has its own pros and cons, and affords different kinds of insights. Thus, a data collection strategy should be chosen by the researcher only after evaluating business objectives, project scope, budget, resources and time available to carry out the research (such as access to users and technology needed for testing), and weighing the pros and cons of each method in light of these considerations. A good strategy combines various techniques in order to offset the cons of each by their pros, gather converging evidence and maximize insights to drive future decisions. Let’s consider how this plays out in a couple of scenarios.

Scenario 1

Imaginary Company wants to visually freshen up their e-commerce website and increase order conversion rates; that is, while customers add many items to the shopping cart, some orders are never completed. To help Imaginary Company meet its objectives, designers can use their experience and best practices to change shadows, colors, fonts, and other details on pages. However, data should inform all decisions to change page layouts or navigation through the website. After all, design first and foremost must serve a specific function. If Imaginary Company has only a week to gather user insights that would then inform the website re-design, the research strategy could consist of website analytics, heuristic evaluation, content analysis, and maybe even a guerrilla usability test.

In this example, website analytics will reveal the point at which visitors stop placing an order or leave the website. For example, many users might quit on the registration page if it asks too many questions they consider irrelevant. A click-pattern analysis can also indicate which areas and buttons users do and do not interact with. A heuristic evaluation, where the website is assessed against recognized usability principles, will reveal design flaws that hinder customers from registering or placing orders. A focused content analysis, driven by insights from the analytics, can reveal inconsistencies among the instructions, descriptions, and error messages users encounter. Finally, if time allows a guerrilla usability test, a few users can be asked to place an order on the website; here the researcher learns not only how users feel but also what affects their experience in positive and negative ways. Executed in a short period of time, these combined methods will expose specific issues with page layouts and the registration procedure, and highlight areas for improvement, allowing designers to focus their expertise and tools on Imaginary Company's business objectives.
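To make the analytics step concrete, here is a minimal, hypothetical sketch of the kind of funnel drop-off calculation an analytics tool performs behind the scenes. The funnel steps, visit data, and function name are all invented for illustration; real products report these figures directly.

```python
from collections import Counter

# Hypothetical checkout funnel for an e-commerce site, in order.
FUNNEL_STEPS = ["cart", "registration", "shipping", "payment", "confirmation"]

def drop_off_report(furthest_steps):
    """Given each visitor's furthest step reached, return the share of
    visitors who reached each funnel step."""
    reached = Counter()
    for step in furthest_steps:
        # A visitor who reached step i also passed through steps 0..i.
        for s in FUNNEL_STEPS[: FUNNEL_STEPS.index(step) + 1]:
            reached[s] += 1
    total = len(furthest_steps)
    return {step: reached[step] / total for step in FUNNEL_STEPS}

# Invented sample: 100 visitors and the furthest step each one reached.
visits = (["cart"] * 20 + ["registration"] * 50
          + ["payment"] * 20 + ["confirmation"] * 10)
report = drop_off_report(visits)
```

With this sample data, 80% of visitors reach the registration page but only 30% get past it to shipping: the sharpest drop in the funnel, pointing at registration as the page to investigate first.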

Scenario 2

A Fictional Business sets out to implement a solution that will help warehouse forklift operators complete work orders more efficiently and thus reduce overall operating costs. When a couple of weeks are available for user research, contextual inquiry, interviews, and task analyses can be conducted to gather solution requirements. Contextual inquiry allows researchers to observe forklift operators in the context of their work: what processes and task steps they follow, how long tasks currently take, what tools they use, and whom they interact with, how often, and why. Insights from the contextual inquiry can then focus the interviews, allowing the researcher to probe how various components of the work environment affect the operators' experience. Finally, task analyses break tasks into subtasks and individual steps at a fine level of detail, helping the team discover areas for improvement. Together, these methods can reveal the warehouse processes that currently limit forklift operators' efficiency, identify bottlenecks in procedures, and pinpoint the frustrations that a designed solution can address.

Armed with these user insights, designers can determine what kinds of solutions, spanning technology, user interfaces, and process changes, would meet the objectives of the Fictional Business. For example, if forklift operators spend an unreasonable amount of time filling out paperwork as they complete work orders, a potential solution could be digital document filing on a mobile tablet, with form fields automatically populated from the forklift's location, the products in the warehouse, the time of day, the identity of the tablet's user, and so on.
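As a thought experiment, the auto-populated work-order form could look something like the sketch below. Every name here, from the field names to the bay-to-product lookup and the operator ID format, is invented for illustration rather than taken from any real warehouse system.

```python
from datetime import datetime

# Invented lookup: which product is stored in which warehouse bay.
BAY_PRODUCTS = {"bay-12": "pallet racking bolts", "bay-07": "shrink wrap"}

def prefill_work_order(operator_id, forklift_location, now=None):
    """Populate work-order form fields from context so the operator
    types as little as possible."""
    now = now or datetime.now()
    return {
        "operator": operator_id,
        "location": forklift_location,
        # Guess the product from the bay; leave blank if the bay is unknown.
        "product": BAY_PRODUCTS.get(forklift_location, ""),
        "timestamp": now.isoformat(timespec="minutes"),
    }

form = prefill_work_order("op-42", "bay-12")
```

The design choice is the one the research motivates: the operator confirms or corrects prefilled values instead of transcribing them by hand, which is where the observed time was being lost.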

In conclusion

When engaged at the start of a project, user research defines and refines the list of project requirements, allowing designers to focus their expertise and tools on addressing business objectives rather than intuitively improvising a solution. Would detailed insights about how to meet the goals of the Imaginary Company or the Fictional Business have been possible if a survey or focus group had been mandated as the research strategy? Perhaps some, but certainly not as focused, and not at the depth the techniques described above afford.

Mandating a particular data collection method is like putting the lowest-grade gasoline into a Ferrari: the car will move, but it will never reach its full potential or give the driver the best possible ride. The same goes for research: dictating which methods should be used to gather user intelligence limits the scope and amount of information researchers can collect. So list your business objectives, sit back, and let the experts choose the data collection methods that will inform and drive your business strategy.


This blog post was originally published here