
Sample Recommendation with Explainable Results

Abstract


This section walks you through another sample recommendation process to further illustrate the actual procedure as well as the path of reasoning from input to results. Keep in mind that the explanations appearing in the very last response are concepts, with their computed scores, that are contained both in the recommended document and in the search (i.e. the input of the recommendation call).

We have already determined our search space, corpus, minimum corpus score and the actual input text; in our sample call this text is "Renewable energy reduces carbon emissions." For the extract request we therefore only need a few more parameters: the maximum number of concepts, filtering of nested concepts, use of the corpus score, shadow concepts and disambiguation, as well as whether to show the matching information.
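The following minimal sketch shows what such an extract request could look like from a client's point of view, assuming a Python client with the requests library; the endpoint path, parameter names and identifiers are illustrative assumptions rather than the exact API contract, so please check the API reference of your installation for the real names.

import requests

# NOTE: endpoint path and parameter names are placeholders / assumptions,
# not the documented API contract; consult your installation's API reference.
EXTRACT_URL = "https://<your-server>/recommender/api/extract"

extract_request = {
    "searchSpaceId": "<your-search-space-id>",            # the search space determined earlier
    "text": "Renewable energy reduces carbon emissions.",
    "maxConcepts": 10,                                     # maximum number of concepts
    "filterNestedConcepts": True,                          # filtering of nested concepts
    "useCorpusScore": True,                                # use the corpus score
    "useShadowConcepts": True,                             # also return shadow concepts
    "useDisambiguation": True,                             # disambiguation
    "showMatchingDetails": True,                           # show the matching information
}

response = requests.post(EXTRACT_URL, json=extract_request, timeout=30)
response.raise_for_status()
extract_result = response.json()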

In response to this call the API returns the following: whether the extraction was successful, the list of extracted concepts and, in our case, all of the identified shadow concepts.

All concepts, including shadow concepts, are listed with their URI, label and score, together with the matched text values and their respective scores.
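As a rough illustration, the response could be consumed along these lines; the field names used here are assumptions about the JSON layout, not the documented schema.

# Field names ("success", "concepts", "shadowConcepts", ...) are assumptions about
# the JSON layout, used only to illustrate how the response is structured.
if extract_result.get("success"):
    all_extracted = extract_result.get("concepts", []) + extract_result.get("shadowConcepts", [])
    for concept in all_extracted:
        print(concept["uri"], concept["label"], concept["score"])
        # Each matched text value carries its own score as well.
        for match in concept.get("matchedTexts", []):
            print("  matched:", match["value"], match["score"])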

Our next call is an expand request. Here you see again our search space (the same as for the extract call) and then the list of concepts, which are now represented only by their URI and score, followed by the actual expansion query, which in our case uses the narrower relation to create the semantic footprint.

In the expand request we also specify the maximum number of concepts. Here we use the concepts from our extract response.
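Continuing the sketch, an expand request might be built as follows; again, the endpoint and field names are assumptions, and only the URI and score of each extracted concept are passed on.

# Hypothetical endpoint and payload shape for the expand request.
EXPAND_URL = "https://<your-server>/recommender/api/expand"

expand_request = {
    "searchSpaceId": "<your-search-space-id>",   # same search space as in the extract call
    "concepts": [
        {"uri": c["uri"], "score": c["score"]}   # from the extract response: URI and score only
        for c in extract_result.get("concepts", [])
    ],
    "expansionQuery": {"relation": "narrower"},  # the narrower relation builds the semantic footprint
    "maxConcepts": 20,                           # maximum number of concepts
}

expand_result = requests.post(EXPAND_URL, json=expand_request, timeout=30).json()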

The response to the expansion query indicates whether the query was successful and, if so, lists the expanded concepts, showing the label, URI and score for each of them.

Our last call is the recommend request. Here we use the same search space and the same input text ("Renewable energy reduces carbon emissions."), along with all of the concepts returned by the expansion call as well as all shadow concepts; for each concept you see its label, URI and score. Concepts are ranked by their score, from highest to lowest.
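A corresponding recommend request could then be assembled as sketched below, reusing the variables from the previous steps; the endpoint and field names remain illustrative assumptions.

# Hypothetical endpoint; the expanded concepts and the shadow concepts are sent together.
RECOMMEND_URL = "https://<your-server>/recommender/api/recommend"

all_concepts = expand_result.get("concepts", []) + extract_result.get("shadowConcepts", [])

recommend_request = {
    "searchSpaceId": "<your-search-space-id>",
    "text": "Renewable energy reduces carbon emissions.",
    "concepts": [
        {"label": c["label"], "uri": c["uri"], "score": c["score"]}
        # ranked by score, from highest to lowest
        for c in sorted(all_concepts, key=lambda x: x["score"], reverse=True)
    ],
    "page": 0,   # zero-based page index
}

recommend_result = requests.post(RECOMMEND_URL, json=recommend_request, timeout=30).json()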

The system now returns a recommend response, which informs us whether the call was successful. Here we see our results listed on the first page (which is page 0, since we are using zero-based indexing in the request call). We see all the details for each of the recommendation results: the ID (which is unique, and here a URL), the title, a description, then a link, date and author, followed by the list of concepts that were identified in each of the results (i.e. the recommended documents). This is followed by an explanation along with the total score and the matching concepts.
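Reading such a response might look roughly like this; the field names are again assumed for illustration rather than taken from the documented schema.

# Field names are illustrative, not the documented response schema.
if recommend_result.get("success"):
    for result in recommend_result.get("results", []):
        print(result["id"], result["title"])                  # the ID is unique, here a URL
        print(result.get("description"), result.get("link"))
        print(result.get("date"), result.get("author"))
        for concept in result.get("concepts", []):            # concepts identified in this document
            print("  ", concept["label"], concept["uri"], concept["score"])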

As you can see, all of the requests use the same search space; the extracted concepts (and, in our case, the shadow concepts) are then used in the expansion request, and all concepts, including those returned by the expansion call, are subsequently included in the recommend request.

Explanations are concepts contained both in the document and in the search (i.e. the input of the recommendation call). Our example lists, for instance, the concept with the URI https://esg.poolparty.biz/esg-core/76110900-8377-49d3-b8b9-c86f46927258 as one of the explanation concepts for our last recommendation, along with its computed score of 627.731 and the explanation's total score of 2965.497, which is the score computed for the recommended document, i.e. the result of the call.
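Under the same assumptions about the response layout, the explanation of a result could be inspected like this; the field names totalScore and matchingConcepts are hypothetical stand-ins for whatever the actual schema uses.

# Illustrative only: the explanation of a result lists the concepts shared by the
# recommended document and the search input, each with a computed score, plus a total score.
last = recommend_result["results"][-1]
explanation = last.get("explanation", {})
print("total score:", explanation.get("totalScore"))          # e.g. 2965.497 in our sample
for concept in explanation.get("matchingConcepts", []):
    print(concept["uri"], concept["score"])                   # e.g. the .../76110900-... concept with 627.731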

Note

Please note that https://esg.poolparty.biz/esg-core/ is not publicly available.

For more information on shadow concepts, the filtering of nested concepts and disambiguation, go to the following sections of our documentation: