User talk:Harikrishna/Architecture/Learning Model


Two threads

Sven Schwarz says: According to the current state of the art in context research, the problem can be divided into two threads:

1. "Perception" (or "user observation")
---> Goal: Gather contextual evidence
---> Realization: User observation modules/components that observe the user's behaviour (desktop actions), track his current location, emotion, and whatever else. That is what is done with the [User Observation Hub].

2. "Context Awareness" (or "context elicitation")
---> Goal: Estimate and model the user's context(s)
---> Realization: The best way of doing this is not yet settled, but by picking the low-hanging fruit you can already achieve a lot. The most important thing here is to clarify what that "context" actually is. So: let's start by defining what the context is used for. Your ideas (in previous emails) went in the right direction. Then we define how we can model this efficiently. Finally, we implement some algorithms that fill and update this context model. As I said, that last part is not yet settled, but don't worry about it for now. It will live!
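To make the split concrete, here is a minimal C++ sketch of how the two threads could meet: an observation module reports pieces of contextual evidence, and a context-model component consumes them. All names (Evidence, ContextModel, the fields) are illustrative assumptions, not the actual User Observation Hub API.

```cpp
// A rough sketch, not the actual User Observation Hub API: an
// observation module (thread 1) reports pieces of contextual
// evidence, and a context model (thread 2) consumes them.
#include <iostream>
#include <string>
#include <vector>

// One piece of contextual evidence, as an observation module
// (desktop action tracker, location tracker, ...) might report it.
struct Evidence {
    std::string source;    // e.g. "desktop", "gps", "im-client"
    std::string resource;  // URI of the thing the user touched
    double confidence;     // how sure the observer is, in [0, 1]
};

// Thread 2: receives evidence and updates the context estimate.
// The estimation strategy itself is deliberately left open, as
// the text above says.
class ContextModel {
public:
    void onEvidence(const Evidence& e) {
        history_.push_back(e);
        std::cout << "evidence from " << e.source << ": " << e.resource
                  << " (confidence " << e.confidence << ")\n";
        // ... update the context model here ...
    }

private:
    std::vector<Evidence> history_;
};

int main() {
    ContextModel model;
    model.onEvidence({"desktop", "file:///home/me/MyPlan.txt", 0.9});
    model.onEvidence({"gps", "geo:49.25,7.04", 0.7});
}
```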

Overall Process

Leo Sauermann says:
1. Applications observe the user's activities and communicate them to other applications and to a central "user work context" daemon that gathers them.
All activities may also be logged for later use. For example: the user opens the file "My Plan to get famous" in KWrite and starts editing.

2. Daemons report background context.
For example: a GPS daemon connected to a GPS mouse sends the user's position as context information.

3. the "user work context" daemon receives these messages and computes what the user is probably doing, this is important to have a daemon for this, as user often press alt-tab and you need something clever to keep track of what the user is really doing

Associating resources with Context

Leo Sauermann says: Each MediumTermContextThread has a probability value (between 0 and 1) for every possible element in the user's life. For most of them, the value is of course 0.
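That probability store can be sketched in a few lines of C++, assuming a sparse map so that unseen elements implicitly have the value 0. The class name follows the terminology above; the boost method and the 0.1 increment are assumptions for illustration.

```cpp
// Minimal sketch of a medium-term context thread's probability
// store. Unknown elements default to 0, keeping the store sparse.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>

class MediumTermContextThread {
public:
    explicit MediumTermContextThread(std::string label)
        : label_(std::move(label)) {}

    const std::string& label() const { return label_; }

    // Probability that `element` belongs to this context thread.
    double probability(const std::string& element) const {
        auto it = p_.find(element);
        return it == p_.end() ? 0.0 : it->second;
    }

    // "Boost" an element that just turned out to be relevant.
    void boost(const std::string& element, double amount = 0.1) {
        p_[element] = std::min(1.0, probability(element) + amount);
    }

private:
    std::string label_;
    std::map<std::string, double> p_;  // element -> probability
};

int main() {
    MediumTermContextThread cid("Working on my CID project");
    cid.boost("contact:Dirk");  // Dirk calls while "CID" is active
    std::cout << cid.probability("contact:Dirk") << " "      // 0.1
              << cid.probability("contact:Charlie") << "\n";  // 0
}
```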

Suppose the user is in the medium-term context thread "Working on my CID project" when Dirk calls me (he is related to CID and has a probability > 0 in this context thread). Dirk gets "boosted" because he seems to be relevant: his probability in this context thread increases.

I get a second NOP (native operation) message saying that a chat has come in from the contact "Charlie the unicorn", who is not related to CID. Charlie talks about going to Candy Mountain.

Now comes the tricky part, which we haven't really solved:

  • either we switch to a new medium-term context thread "Going to Candy Mountain with Charlie" and boost Charlie there, or
  • we ignore the switch and just add Charlie to the CID context with a low probability.

The tricky part really is tricky. It will probably involve some visible feedback to the user, such as asking: "Are Charlie and this message part of your current activity 'CID Project'?" Following Sven Schwarz's work, we would also like the user to be able to set explicit context changes.

Otherwise, the system assumes that everything you touch is associated with the current work context thread and boosts the concept there. If the probability that the clicked resource is related to the current context thread is very small, a new context thread is automatically started, or other existing context threads are searched for possibly matching resources; see the sketch below.
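Here is a hedged sketch of that boost-or-branch rule: a touched resource boosts the current context thread, and if its probability there is below a threshold, a new thread is branched off. A fuller version would first search other existing threads and would measure relatedness between resources (as Dirk is related to CID) rather than relying only on stored probabilities. The 0.05 threshold, the 0.1 boost, and all names are assumptions.

```cpp
// Sketch of the boost-or-branch rule: boost resources into the
// current context thread, or branch a new thread when the touched
// resource looks unrelated to it.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct ContextThread {
    std::string label;
    std::map<std::string, double> p;  // resource -> probability
};

class ContextManager {
public:
    explicit ContextManager(std::string initialLabel) {
        threads_.push_back({std::move(initialLabel), {}});
    }

    // The user touched `resource`; decide whether it belongs to the
    // current thread or warrants a new one labelled `newLabel`.
    void touched(const std::string& resource, const std::string& newLabel) {
        ContextThread& cur = threads_[current_];
        double prob = value(cur, resource);
        if (prob >= kThreshold || cur.p.empty()) {
            cur.p[resource] = std::min(1.0, prob + 0.1);  // boost
        } else {
            // Too unrelated: branch a new medium-term context thread.
            // (A fuller version would first search existing threads.)
            threads_.push_back({newLabel, {{resource, 0.5}}});
            current_ = threads_.size() - 1;
        }
    }

    const std::string& currentLabel() const { return threads_[current_].label; }

private:
    static double value(const ContextThread& t, const std::string& r) {
        auto it = t.p.find(r);
        return it == t.p.end() ? 0.0 : it->second;
    }

    static constexpr double kThreshold = 0.05;
    std::vector<ContextThread> threads_;
    std::size_t current_ = 0;
};

int main() {
    ContextManager m("Working on my CID project");
    m.touched("contact:Dirk", "");  // Dirk: boosted in the CID thread
    m.touched("contact:Charlie", "Going to Candy Mountain with Charlie");
    std::cout << m.currentLabel() << "\n";  // the branched Charlie thread
}
```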

In practice, we have a half-hearted implementation of this, based on a matrix, by Sven Schwarz. Every time I say "medium-term context thread", I mean the stateful variable/matrix managed by the UserWorkContext daemon.


Two Learning modes

Hari Krishna says: I assume that, until now, we have been talking about a system that starts with zero understanding of the user and tries to learn how to categorize things by asking the user about every assumption it makes. Once it starts to understand the user's habits reasonably well, it would start to give relevant results...

I think such a system that learns from scratch has several disadvantages:

1. It needs excessive training from the user, who might have to correct every interpretation the system makes. A learning phase in which the system asks the user about every single decision is simply too tiring (in practice, users do not see its benefit unless they keep at it for a long time), and it still does not cover the corner cases. So assuming that the user will diligently train the system for an extended period is not practical. Most users are accustomed to caring for their pets, not their computers!

2. If the user somehow trains it wrongly, the system becomes entirely useless, leaving the user frustrated that all his effort has been wasted.

3. The user does not know when he can start trusting the system, and the system cannot know when it can offer reliable recommendations. As a result, the system shows wrong results during the initial stages, which might cause the user to lose trust in it.

So here is what we can do: allow the user to configure things like associated contacts beforehand (it should not take more than a few minutes!), so that the system has a baseline understanding of what the user wants and starts giving meaningful suggestions/concepts immediately after the configuration. After that, it can start listening to the user's actions and try to learn from them. Even if the system is trained completely wrongly during the learning period, at least part of it remains relevant, because it still understands the baseline configuration that the user himself set up.

If the user is too lazy to do that, then we can always offer a mode in which the system starts learning from scratch.

All I ask is that both options be available to the user and that the system be able to handle both situations.
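As a closing illustration, here is a speculative C++ sketch of the two modes living side by side: a baseline layer the user configures by hand and a learned layer filled by observation, blended so that bad training never completely erases the explicit configuration. The 50/50 blending rule and all names are assumptions made purely for illustration.

```cpp
// Sketch of the two learning modes: an explicit user-configured
// baseline plus a learned layer, blended so the baseline survives
// even if the learned part is trained wrongly.
#include <iostream>
#include <map>
#include <string>

class LearningModel {
public:
    // Mode 1: pre-seed with user-configured associations.
    void configureBaseline(const std::string& element, double prob) {
        baseline_[element] = prob;
    }

    // Mode 2 (and ongoing operation): learn from observed actions.
    void learn(const std::string& element, double observed) {
        learned_[element] = observed;
    }

    // Blend: even if the learned part was trained completely wrongly,
    // the user-configured baseline still contributes.
    double probability(const std::string& element) const {
        double b = get(baseline_, element);
        double l = get(learned_, element);
        return 0.5 * b + 0.5 * l;  // blending weights are an assumption
    }

private:
    static double get(const std::map<std::string, double>& m,
                      const std::string& k) {
        auto it = m.find(k);
        return it == m.end() ? 0.0 : it->second;
    }

    std::map<std::string, double> baseline_;  // explicit configuration
    std::map<std::string, double> learned_;   // from observation
};

int main() {
    LearningModel m;  // with no baseline, this is the from-scratch mode
    m.configureBaseline("contact:Dirk", 0.8);  // a few minutes of setup
    m.learn("contact:Dirk", 0.2);              // even bad training...
    std::cout << m.probability("contact:Dirk") << "\n";  // ...leaves 0.5
}
```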