When you search for information in a search engine such as Google, a list of results is displayed for you to explore further. This process is called Information Retrieval. The search results are assembled according to criteria tailored to you, such as your past search history (to match your interests), your geographic location (to surface what is nearby), and advertising targeted to both. These criteria are coded into algorithms that automate the information retrieval process around your needs, or around what you would potentially consider relevant.
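To make the idea concrete, here is a minimal sketch of how personalization signals might be folded into a ranking score. The function name, signals, and weights are all illustrative assumptions, not any real search engine's algorithm:

```python
# Hypothetical sketch of personalized ranking. The weights (0.3, 0.2)
# and signals are invented for illustration only.

def personalized_score(base_relevance: float,
                       matches_history: bool,
                       near_user: bool) -> float:
    """Combine topical relevance with personalization boosts."""
    score = base_relevance
    if matches_history:   # boost results matching past search history
        score += 0.3
    if near_user:         # boost geographically nearby results
        score += 0.2
    return score

results = [
    {"title": "Coffee health benefits", "rel": 0.8, "hist": True,  "near": False},
    {"title": "Risks of caffeine",      "rel": 0.8, "hist": False, "near": False},
]
ranked = sorted(results,
                key=lambda r: personalized_score(r["rel"], r["hist"], r["near"]),
                reverse=True)
# Two equally relevant results: the one matching the user's history
# is ranked first, which is how personalization narrows what you see.
```

Note how the two documents are equally relevant to the query; personalization alone decides the order, which is exactly the mechanism the paragraph above describes.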
For example, let’s say you are searching for information about the healthiness of coffee and you search for “is coffee good for your health.” You may be looking for information that confirms your belief about the benefits of coffee, or you may simply be asking whether coffee is good or bad for your health. If you asked this question of a human expert on coffee, you would likely get an answer that weighs both the benefits and the harms. Ideally, when you enter the same question into a search engine, it should likewise return both the positive and the negative evidence about coffee.
Unfortunately, searching for “is coffee good for your health” and “is coffee bad for your health” will return different sets of results, each catered to the way you phrased the question rather than to the question as a whole. Catering so specifically to the user can create bias: if you only see information that matches what you are already interested in, or what is geographically near you, other perspectives are filtered out of the results list.
So, how can we improve the algorithms that are used in the information retrieval process to incorporate more perspectives to reduce bias?
InfoSeeker Ruoyuan Gao is working on addressing the bias found in search engine results. She is exploring several strategies for investigating the relationship between information usefulness and fairness in search engines such as Google, and she proposes developing tools that measure the degree of bias in order to create a more balanced list of search results, one that includes many relevant perspectives on a search topic.
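As a rough illustration of what “measuring the degree of bias” could mean, the sketch below scores how unevenly perspectives are represented in a results list. This metric is an assumption made up for this post, not Gao’s actual measure:

```python
from collections import Counter

def perspective_bias(perspectives: list[str]) -> float:
    """Illustrative imbalance score in [0, 1]:
    0.0 = perspectives evenly represented, 1.0 = only one perspective.
    Not an actual published fairness metric."""
    if not perspectives:
        return 0.0
    counts = Counter(perspectives)
    if len(counts) == 1:
        return 1.0
    max_share = max(counts.values()) / len(perspectives)
    even_share = 1 / len(counts)
    # Scale the deviation from a perfectly even split onto [0, 1].
    return (max_share - even_share) / (1 - even_share)

# Label each result in a top-5 list by the perspective it presents.
one_sided = ["pro-coffee", "pro-coffee", "pro-coffee", "pro-coffee", "anti-coffee"]
balanced  = ["pro-coffee", "anti-coffee", "pro-coffee", "anti-coffee"]

print(perspective_bias(one_sided))  # high imbalance
print(perspective_bias(balanced))   # 0.0
```

A tool like this could flag a results list that over-represents one viewpoint, which a re-ranking step could then correct toward a more balanced list.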