Research shows little change in search behavior over the last decade. Search engines have improved a lot; people haven't. How come? Let us use teachings from the psychology of learning to shed some light on this.
Why is there no learning?
The Internet has changed our behavior in most areas of life, but we still perform our Google searches almost the same way as we did ten years ago: using few, and often too broad, terms. Is it laziness, a poor understanding of search systems, or the fact that every search act is unique, so that little knowledge transfers from previous experience? I will argue that most search systems, by their design, lead to learning by trial and error rather than learning by insight, and that this explains the slow change in behavior.
The paradox of simplicity
Product development is often driven by the need for a more comfortable user experience, for example cars whose window wipers turn on automatically when it starts to rain. It is a paradox that products become more complex internally because we want them to be easier to use. The result can sometimes be the opposite of what was intended, if the drive for simplicity cripples our understanding of how to use the product.
Is it important to understand how things work?
You don’t have to know anything about film editing to enjoy a good movie. Sometimes it can even be a good thing not to understand how things work, for instance when you watch a magician perform a trick.
In the examples above, the user plays a passive role (even if the magician can fool you into believing you influence the outcome). In interactive systems such as applications and websites, users must understand how the system works, at least on a functional level, to be able to use it effectively.
Search engines are indeed interactive systems, but to most of us they appear as a “black box”, in the sense that we have only rudimentary knowledge of what happens inside.
Does knowledge of the internal properties of search systems matter to users? For some designers, "don't make me think" is the ideal, and the ultimate goal is to remove all cognitive load. For others, successful design means utilising our higher cognitive capacities and our great ability to learn, in order to create better outcomes.
Learning by trial and error
Not knowing how things work often means people have to learn by trial and error. In the context of searching, the trial-and-error process starts with the user typing more or less random search strings. Some of them give good results, some do not. Successful behavior is reinforced and tends to get repeated. According to behaviorism, genuine understanding of the system is not necessary for learning to happen; it is enough for the user to know that certain responses tend to be rewarded. The hallmark of learning by trial and error is a learning curve of slow, incremental improvement, very similar to the general development in search behavior in recent years. But let us not jump to conclusions.
Learning by insight
Insight is a central part of Gestalt theory and has been defined as “learning that occurs rapidly, is remembered for a considerable time and transfers readily to situations related to the one in which the insightful learning took place”. Having insight means that a user has a genuine understanding of how a system works.
Several things make learning by insight difficult in the context of search. First, users must know something about the information space being searched, not only the properties of the search engine itself. This means that each search act is potentially a novel experience for the user. Secondly, users have to relate to many different search engines, which makes knowledge transfer even more difficult.
Since insight is so important to gain, the key question is: how can search systems be designed to facilitate learning by insight? I will discuss three properties of search systems that are central to this question: transparency, feedback and convention.
From black box to transparency
A real challenge when designing search interfaces is to make the relevant aspects of the system transparent while hiding complexities that are of no use to the user. It does not facilitate learning to tell the user how fast the search engine retrieved the results, or what kind of algorithm corrected their spelling errors.
Some aspects of search engines can be hard to explain to the user, for instance the difference between keyword search (which works well with Google) and semantic search (which does not). How do you make that difference clear to users? Some vendors have solved the problem by performing keyword search when the user types a few words and semantic search when longer sentences are used, without telling the user about the switch. Supporting several search behaviors without exposing the technical solution behind them can often be a good strategy.
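The routing heuristic described above can be sketched in a few lines. This is an illustration only: the function name and the word-count threshold are my own assumptions, not any vendor's actual implementation.

```python
def route_query(query: str, word_threshold: int = 4) -> str:
    """Pick a search mode from query length (threshold is illustrative)."""
    # Few terms: treat them as keywords to be matched directly.
    if len(query.split()) <= word_threshold:
        return "keyword"
    # A longer, sentence-like query: try to interpret its meaning.
    return "semantic"

# route_query("car insurance")                            -> "keyword"
# route_query("what does my car insurance cover abroad")  -> "semantic"
```

The point of the design is that the user never sees this branch; both behaviors simply work.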
As mentioned before, understanding the information space is a very important part of the search experience. While technical aspects of the search engine can be hidden, properties of the information space should not be. Paradoxically, the difficulty of describing an information space has nothing to do with its size. Google, for instance, has an information space that is easy to understand, because it contains “everything” (relative to site-specific information, that is).
Searching a single website is in fact more difficult than searching the entire web, because the user has a poorer understanding of the extent of the information space. This limited understanding often leads to the strategy of using broader terms than necessary. It is important to help the user narrow the search, for instance by displaying the taxonomy of the site and offering the user more specific and precise terms.
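Offering narrower terms can be as simple as a lookup in the site's taxonomy. A minimal sketch, assuming a tiny hand-made taxonomy; a real site would derive this mapping from its own category structure:

```python
# Hypothetical taxonomy mapping broad terms to narrower ones.
TAXONOMY = {
    "insurance": ["car insurance", "home insurance", "travel insurance"],
    "car insurance": ["third-party liability", "comprehensive cover"],
}

def suggest_narrower(term: str) -> list[str]:
    """Return more specific terms the user could search for instead."""
    return TAXONOMY.get(term.strip().lower(), [])
```

Shown alongside the result list, such suggestions teach the user what the information space actually contains.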
The role of feedback
A general problem with search systems is that the feedback they give is ambiguous (“your search returned 12000 hits…”). The reinforcing mechanisms get blurred: is it a good or a bad thing to get many hits? Users also tend to apply overly broad terms when searching. This, too, can be a result of poor feedback, namely a fear of getting no hits at all. Relevant and informative feedback when the user gets 0 hits is therefore very important to the learning process. Feedback could also be given on a larger set of search entries, not only on single queries: if there is consistency in the way the user searches, the probability of giving correct feedback increases.
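As an illustration of feedback tuned to the learning process, here is a sketch; the thresholds and wording are my own assumptions, not taken from any existing system:

```python
def feedback(query: str, hit_count: int) -> str:
    """Turn a raw hit count into feedback the user can learn from."""
    if hit_count == 0:
        # Make 0 hits informative instead of frightening.
        return (f'No results for "{query}". Check the spelling, '
                'or try a broader term.')
    if hit_count > 1000:
        # Tell the user explicitly that many hits is not a success.
        return (f'{hit_count} results. Your terms may be too broad; '
                'try adding a more specific word.')
    return f'{hit_count} results for "{query}".'
```

The idea is that each message tells the user not just what happened, but what to try next.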
The power of convention
It may sound obvious, but learning would transfer more easily from one search system to another if they behaved in similar ways. This matters especially on the web, where users in general are not willing to invest much time in learning new principles. Today, no Internet search system completely shares conventions with another.
Search systems today often appear as a black box. They should instead be designed to support learning by insight, by building in transparency, feedback and convention.
Follow me on Twitter: @JorgenDalen