Some applications need to push large quantities of data into Nepomuk. They are typically called "feeder" applications as they provide Nepomuk with the data it requires. A database is only as powerful as the data it holds.
While one could use the
Resource class to push the data, it would be slow: the
Resource class is synchronous and writes back to the database after each command. What is needed is an asynchronous API with which the application can simply write all of the data, and then let Nepomuk process and merge the data provided with its internal database.
Applications can use the
SimpleResource class to model the data that they want to push. The
SimpleResource class is not connected to the Nepomuk database, and is just a convenience wrapper around a
QMultiHash. Any changes made to these SimpleResources are not reflected back to the database unless explicitly saved.
An example -
Nepomuk2::SimpleResource coldplay;
coldplay.addType( NCO::Contact() );
coldplay.addProperty( NCO::fullname(), "Coldplay" );

Nepomuk2::SimpleResource album;
album.addType( NMM::MusicAlbum() );
album.addProperty( NIE::title(), "X&Y" );

Nepomuk2::SimpleResource fileRes;
fileRes.addType( NFO::FileDataObject() );
fileRes.addType( NMM::MusicPiece() );
fileRes.addProperty( NMM::performer(), coldplay );
fileRes.addProperty( NMM::musicAlbum(), album );
fileRes.addProperty( NIE::url(), fileUrl );
fileRes.addProperty( NIE::title(), "What If" );
In the above example we wish to push data about the song "What If" by the popular English band Coldplay. We create a separate SimpleResource for each resource that we want to push into Nepomuk, and then add the relevant metadata. These
SimpleResources can reference each other.
All of this data is currently just stored in memory in a hash table. In order to push the data into Nepomuk, we group it all together using a
SimpleResourceGraph, after which we can push the data by calling save() -
Nepomuk2::SimpleResourceGraph graph;
graph << coldplay << album << fileRes;

KJob* job = graph.save();
The save operation returns a KJob which has already begun execution. The operation continues asynchronously and emits a signal on completion.
The completion signal carries the respective KJob, which can then be checked for errors; errors occur, for example, if we tried to save invalid data. It is up to the programmer to make sure that the data is valid. Invalid data is completely ignored and an error is reported.
The storeResources function is a lengthy procedure that performs multiple operations on the data before pushing it into Nepomuk. The two main parts of the job are outlined below.
Each SimpleResource contains a uri, which is either an actual uri of the form
nepomuk:/res/some-unique-identifier or is a blank uri of the form
_:identifier. The SimpleResources which contain resource uris can be pushed into Nepomuk directly. The blank uris require some additional processing.
Each SimpleResource with a blank uri needs to be translated to a corresponding nepomuk resource uri, if that resource already exists. Otherwise a new resource needs to be created. This translation process is called resource identification. It is performed using the properties specified in the SimpleResource.
Certain properties in the ontologies are marked as defining properties. The criteria are decided as follows -
- Properties with a literal range are always defining, unless explicitly marked as a nrl:NonDefiningProperty
- Properties with a resource range are always NOT defining, unless explicitly marked with nrl:DefiningProperty
Two resources are said to match each other if the following criteria are met -
- Their lists of types match.
- The resources do not have any defining properties which do not match.
- At least one defining property matches.
If the following resource already exists in the Nepomuk Repository -
<nepomuk:/res/A>
    rdf:type nco:PersonContact ;
    nco:fullname "Peter Parker" ;
    nco:gender nco:male .
And then the following data is pushed -
SimpleResource peter;
peter.addType( NCO::PersonContact() );
peter.setProperty( NCO::fullname(), QLatin1String("Peter Parker") );

SimpleResource spiderman;
spiderman.addType( NCO::PersonContact() );
spiderman.setProperty( NCO::fullname(), QLatin1String("Spiderman") );
spiderman.setProperty( NCO::gender(), NCO::male() );
In this case
peter will be mapped to
nepomuk:/res/A since it has the same type and all the defining properties match (nco:fullname). It doesn't matter that nco:gender does not match, as
peter doesn't specify a gender. If in an alternative universe
peter had been specified as
nco:female, then
peter would not have been mapped to nepomuk:/res/A.
spiderman does not match any existing contacts, so a new resource with a uri of the form
nepomuk:/res/uuid is created with the specified properties. That uri can be fetched as follows
simpleResourceJob->mappings().value( spiderman.uri() )
Once the identification process has been completed, each SimpleResource goes through a series of checks verifying that the domain, range and cardinality of its properties are correct. The data is then pushed into the database: statements that already exist have their graphs merged, and a new graph is created for the new statements.
1. <Property> has a max cardinality of <value>. Provided <n> values - <list>. Existing - <list>
The error indicates that you're not following the cardinality restrictions that are present in the ontologies. For example, nco:fullname has a max cardinality of 1. That means that any resource can have at most one full name. You have probably given your Contact SimpleResource two full names.
2. <Property> has rdfs:domain/rdfs:range of <Type>. <Resource> only has the following types
If <Resource> is of the form
_:identifier then it means that your SimpleResource with identifier <Resource> is missing the given types. Otherwise, if it is of the form
nepomuk:/res/unique-uuid, that implies that either your SimpleResource was identified as <Resource> and that resource does not have the respective types, or that you are trying to map it to a resource which does not contain that type.
Using the data after pushing
In some applications you may need to access the data after you have pushed it into Nepomuk using
storeResources. Fortunately there is a convenient way to do that. The StoreResourcesJob provides a function called mappings, which lets you map the SimpleResource uris to the actual nepomuk uris once they have been saved.
using namespace Nepomuk2::Vocabulary;

SimpleResource email;
email.addType( NCO::EmailAddress() );
email.addProperty( NCO::emailAddress(), QLatin1String("[email protected]") );

SimpleResource contact;
contact.addType( NCO::Contact() );
contact.setProperty( NCO::fullname(), QLatin1String("Peter Parker") );
contact.addProperty( NCO::hasEmailAddress(), email );

SimpleResourceGraph graph;
graph << contact << email;

StoreResourcesJob* job = graph.save();
job->exec();
Q_ASSERT( !job->error() );

QUrl emailUri = job->mappings().value( email.uri() );
QUrl contactUri = job->mappings().value( contact.uri() );
The email.uri() function will return a uri of the form
_:identifier, and the same is the case with contact.uri().
StoreResourcesJob::mappings returns a
QHash which maps these blank uris to their respective nepomuk uris. They can then be used as follows -
Nepomuk2::Resource contactRes( contactUri );
const QString fullname = contactRes.property( NCO::fullname() ).toString();
Who else is using it?
The SimpleResource API is currently the de facto method of pushing data into Nepomuk. It is heavily used by our own file indexer and by KDE PIM, which uses the SimpleResource API to push emails, contacts and event information into Nepomuk.
For more examples of how to use SimpleResource, we suggest you look at our comprehensive tests in the datamanagementmodel. Add link!!
Most developers do not need to worry about graphs present in Nepomuk. However, for the sake of completeness we're documenting what happens internally. Hopefully, this will help you better understand the intricacies of Nepomuk.
When a SimpleResourceGraph is saved or passed to
storeResources, each statement in the graph is checked for existence in the database. If a triple already exists, it is set aside and handled specially. All other triples are pushed into one big graph that is created with each call to storeResources.
That graph contains the following data -
<nepomuk:/ctx/some-graph> a nrl:Graph .
get some data!!