All posts by Sandra Praetor

Who are these DBpedia users? (And why?)

Guest article by Victor de Boer, Vrije Universiteit Amsterdam, NL, member of NL-DBpedia

Who uses DBpedia anyway?

This question started a research project for Frank Walraven, an Information Sciences Master student at Vrije Universiteit Amsterdam (VUA). The question came up during one of the meetings of the Dutch DBpedia chapter, of which VUA is a member.

If DBpedia users and their usage are better understood, this can lead to better servicing of those DBpedia users, for example by prioritizing the enrichment or improvement of specific sections of DBpedia. Characterizing the use(r)s of a Linked Open Dataset is an inherently challenging task because in an open web world it is difficult to tell who is accessing your digital resources.

Frank conducted his MSc project research at the Dutch National Library and used a hybrid approach, combining a data-driven method based on user log analysis with a short survey to get to know the users of the dataset.

As a scope, Frank selected just the Dutch DBpedia dataset. For the data-driven part of the method, he used a complete user log of HTTP requests on the Dutch DBpedia. This log file consisted of over 4.5 million entries and covered both URI lookups and SPARQL endpoint requests. For this research, he only included a subset of the URI lookups.
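To give a flavour of what this first processing step involves, the sketch below separates SPARQL endpoint requests from plain URI lookups in such a log. The combined-log format, the file name and the URL paths are our own assumptions for illustration; the thesis does not prescribe them.

# Sketch: split an access log into SPARQL endpoint requests and URI
# lookups. Log format, file name and paths are illustrative assumptions.
import re
from collections import Counter

LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (?P<path>\S+)')

counts = Counter()
with open("nl-dbpedia-access.log") as log:  # hypothetical file name
    for line in log:
        match = LINE.match(line)
        if not match:
            continue
        path = match.group("path")
        if path.startswith("/sparql"):
            counts["sparql"] += 1
        elif path.startswith(("/resource/", "/page/", "/data/")):
            counts["uri-lookup"] += 1  # the subset Frank analysed

print(counts)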

Analysis of IP Addresses of DBpedia Users

As a first analysis step, the requests’ origin IP addresses were categorized. Five classes were identified (A-E), with the vast majority of IP addresses falling into class A: very large networks and bots. Most of the IP addresses in this class could be traced back to search engine indexing bots such as those from Yahoo or Google. For classes B-E, Frank manually traced the 30 most frequently encountered IP addresses. He concluded that even there, 60% of the requests came from bots and 10% definitely did not, with the remaining 30% unclear.
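A common way to confirm that an IP address belongs to a search engine crawler is a reverse DNS lookup followed by a confirming forward lookup. The sketch below shows that check; it is our illustration of the technique, not necessarily the exact procedure used in the thesis.

# Sketch: verify whether an IP belongs to a known search engine crawler
# via reverse DNS plus a confirming forward lookup.
import socket

CRAWLER_DOMAINS = (".googlebot.com", ".google.com", ".crawl.yahoo.net")

def is_search_engine_bot(ip: str) -> bool:
    try:
        host = socket.gethostbyaddr(ip)[0]    # reverse lookup
    except OSError:
        return False
    if not host.endswith(CRAWLER_DOMAINS):
        return False
    try:
        forward = socket.gethostbyname(host)  # forward check
    except OSError:
        return False
    return forward == ip                      # must round-trip

print(is_search_engine_bot("66.249.66.1"))    # an address in a Googlebot range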

Step II – Identification of Page Requests

The second analysis step in the data-driven method consisted of identifying what types of pages were most requested. To cluster the thousands of DBpedia URI requests, Frank retrieved the ‘categories’ of the pages. These categories are extracted from Wikipedia category links. An example is the “Android_TV” resource, which has two categories: “Google” and “Android_(operating_system)”. Following skos:broader links, a ‘level 2 category’ could also be found, to aggregate to an even higher level of abstraction. As not all resources have such categories, this does not give a complete picture, but it does provide some idea of the most popular categories of requested items. After normalizing for categories with large numbers of incoming links, for example the category “non-endangered animal”, the most popular categories were:

  1. Domestic & International movies,
  2. Music,
  3. Sports,
  4. Dutch & International municipality information, and
  5. Books.
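To make the category aggregation concrete: DBpedia expresses Wikipedia category links via dct:subject, and broader categories via skos:broader. The following minimal sketch fetches both levels for the Android_TV example from above; it queries the main public endpoint, whereas the thesis worked on the Dutch dataset.

# Sketch: retrieve the categories of a resource and their broader
# ('level 2') categories, mirroring the aggregation described above.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dct:  <http://purl.org/dc/terms/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?category ?level2 WHERE {
  <http://dbpedia.org/resource/Android_TV> dct:subject ?category .
  OPTIONAL { ?category skos:broader ?level2 . }
}
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["category"]["value"], "->", row.get("level2", {}).get("value"))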
Survey

Additionally, Frank set up a user survey to corroborate this evidence. The survey contained questions about the how and why of the respondents’ use of the Dutch DBpedia, including the categories they were most interested in.

The survey was distributed via the Dutch DBpedia website and via Twitter. However, the endeavour only attracted 5 respondents. This illustrates the difficulty of the problem: users of the DBpedia resource are not necessarily reachable through its communication channels. The five respondents were all quite closely related to the chapter, but the results were interesting nonetheless. Most of them used the DBpedia SPARQL endpoint. The full results can be found in Frank’s thesis, but in terms of corroboration, the survey revealed that four of the five categories found in the data-driven method also appeared in the top five results of the survey. The fifth one identified in the survey was ‘geography’, which could be matched to the fifth from the data-driven method.

Conclusion

Frank’s research shows that characterizing the users of an open dataset remains a challenging problem, even with a combination of data-driven and user-driven methods. Yet it is indeed possible to get an indication of the most-used categories on DBpedia. Within the Dutch DBpedia Chapter, we are currently considering follow-up research questions based on Frank’s research. For further information about the work of the Dutch DBpedia chapter, please visit their website.

A big thanks to the Dutch DBpedia Chapter for supervising this research and providing insights via this post.

Yours

DBpedia Association

The Release Circle – A Glimpse behind the Scenes

As you already know, with the new DBpedia strategy our mode of publishing releases has changed. The new DBpedia release process follows a three-step approach, starting with the Extraction, moving on to ID-Management and ending with the Fusion, which finalizes the release. Our DBpedia releases are currently published on a monthly basis. In this post, we give you insight into the individual steps of the release process and into what our developers actually do when preparing a DBpedia release.

Extraction – Step one of the Release

The good news is, our new release mode is taking shape and has noticeably picked up speed. The 2018-08 release and, additionally, the 2018.09.12 and 2018.10.16 releases are now available in our LTS repository.

The 2018-08 release was generated on the basis of the Wikipedia datasets extracted in early August and currently comprises 136 languages. The extraction release contains the raw extracted data generated by the DBpedia extraction framework. The post-processing steps, such as data deduplication or URI normalization, are omitted and moved to later parts of the release process. Thus, we can provide direct, transparent access to the generated data at every step. Until we manage two releases per month, our data is mostly based on the second Wikipedia dump of the previous month. In line with that, the 2018.09.12 release is based on late-August data and the recent 2018.10.16 release is based on Wikipedia datasets extracted on September 20th. They all comprise 136 languages and contain a stable list of datasets since the 2018-08 release.

Our releases are now ready for parsing and external use. Additionally, there will be a new Wikidata-based release this week.

ID-Management – Step two of the Release

For a complete “new DBpedia” release, the DBpedia ID-Management and the Fusion of the data have to be added to the process. The Databus ID Management is a process that unifies the various IRIs which different data providers have coined for the same entities. Taking datasets with overlapping domains of interest from multiple data providers, the set of IRIs denoting the entities in the source datasets is determined heuristically (e.g. excluding RDF/OWL types/classes).

Afterwards, each selected IRI is assigned a numeric primary key, the ‘Singleton ID’. The core of the ID Management process happens in the next step: based on the large set of high-confidence owl:sameAs assertions in the input data, the connected components of the corresponding sameAs graph are computed. In other words: the groups of all entities from the input datasets that are (transitively) reachable from one another are determined. We dubbed these groups the sameAs-clusters. For each sameAs-cluster we pick one member as representative, which determines the ‘Cluster ID’ or ‘Global Identifier’ for all cluster members.
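In essence, the clustering step is a connected-components computation over the sameAs edges. The production workflow runs on Apache Spark (see below); the toy version here uses plain Python and networkx, and picking the lexicographically smallest IRI as representative is our simplification rather than the actual selection heuristic.

# Toy illustration of the sameAs clustering: connected components over
# owl:sameAs edges, one representative per cluster ("Cluster ID").
# Representative selection (smallest IRI) is a simplifying assumption.
import networkx as nx

same_as = [  # (subject, object) pairs from owl:sameAs assertions
    ("http://nl.dbpedia.org/resource/Leipzig", "http://dbpedia.org/resource/Leipzig"),
    ("http://dbpedia.org/resource/Leipzig", "http://de.dbpedia.org/resource/Leipzig"),
]

g = nx.Graph()
g.add_edges_from(same_as)

cluster_id = {}
for component in nx.connected_components(g):  # the sameAs-clusters
    representative = min(component)           # pick one member as Cluster ID
    for iri in component:
        cluster_id[iri] = representative

for iri, rep in cluster_id.items():
    print(iri, "->", rep)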

Apart from being an essential preparatory step for the Fusion, these Global Identifiers serve a purpose in their own right: they act as unified Linked Data identifiers for groups of Linked Data entities that should be viewed as equivalent or ‘the same thing’.

A processing workflow based on Apache Spark that performs the process described above for large quantities of RDF input data is already in place and has been run successfully for a large set of DBpedia inputs.

Fusion – Step three of the Release

Based on the Extraction and the ID-Management, the Data Fusion finalizes the last step of the DBpedia release cycle. With the goal of improving data quality and data coverage, the process uses the DBpedia global IRI clusters to fuse and enrich the source datasets. The fused data contains all resources of the input datasets. For each property, the fusion process decides how many values to select based on whether the property is functional (owl:FunctionalProperty determination). For functional properties, the value selection follows a preference order over the source datasets, for example preferring values from the English DBpedia over the German DBpedia.
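Schematically, the per-property decision could look like the sketch below; the preference order, helper names and data layout are illustrative assumptions, not the actual implementation.

# Sketch of the fusion decision: for a functional property keep exactly
# one value, chosen by source preference; otherwise keep all values.
SOURCE_PREFERENCE = ["en.dbpedia.org", "de.dbpedia.org", "nl.dbpedia.org"]

def fuse(candidates, is_functional):
    """candidates: list of (source, value) pairs for one subject and property."""
    if not is_functional:
        return sorted({value for _, value in candidates})
    # Functional property: keep the value from the most preferred source.
    def rank(pair):
        source, _ = pair
        if source in SOURCE_PREFERENCE:
            return SOURCE_PREFERENCE.index(source)
        return len(SOURCE_PREFERENCE)  # unknown sources rank last
    return [min(candidates, key=rank)[1]]

pairs = [("de.dbpedia.org", "value-from-de"), ("en.dbpedia.org", "value-from-en")]
print(fuse(pairs, is_functional=True))  # -> ['value-from-en'] (English preferred)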

The enrichment improves the coverage of entity properties and values for resources that are contained in only some of the source datasets. Furthermore, we create provenance data to keep track of the origin of each triple. This provenance data is also used for the resource view at http://global.dbpedia.org.

At the moment, the fused and enriched data is available for the generic and mapping-based extractions. More datasets are still in progress. The DBpedia fusion data is being uploaded to http://downloads.dbpedia.org/repo/dev/fusion/

Please note we are still in the midst of beta testing our data release tool, so in case you come across any errors, reporting them to us is much appreciated and fuels the testing process.

Further information regarding the release progress can be found here: http://dev.dbpedia.org/

Next steps

We will add more releases to the repository on a monthly basis, aiming for a bi-weekly release mode as soon as possible. In between these intervals, any mistakes or errors you find and report in this data can be fixed for the upcoming release. Currently, the generated metadata in the DataID file is not stable; it still needs to be improved and will change in the near future. We are now working on the next release and will inform you as soon as it is published.

Yours DBpedia Association

This blog post was written with the help of our DBpedia developers Robert Bielinski, Markus Ackermann and Marvin Hofer, who were responsible for the work on the DBpedia releases. We would like to thank them for their great work.

Retrospective: GSoC 2018

With all the beta-testing, the evaluations of the community survey part I and part II and the preparations for SEMANTiCS 2018, we almost lost sight of telling you about the final results of GSoC 2018. In the following, we present a short recap of this year’s students and projects that made it to the finish line of GSoC 2018.

Et Voilà

We started out with six students that committed to GSoC projects. However, in the course of the summer, some dropped out or did not pass the midterm evaluation. In the end, we had three finalists that made it through the program.

Meet Bharat Suri

… who worked on “Complex Embeddings for OOV Entities”. The aim of this project was to enhance the DBpedia Knowledge Base by enabling the model to learn from the corpus and generate embeddings for different entities, such as classes, instances and properties.  His code is available in his GitHub repository. Tommaso Soru, Thiago Galery and Peng Xu supported Bharat throughout the summer as his DBpedia mentors.

Meet Victor Fernandez

.. who worked on a “Web application to detect incorrect mappings across DBpedia’s in different languages”. The aim of his project was to create a web application and API to aid in automatically detecting inaccurate DBpedia mappings. The mappings for each language are often not aligned, causing inconsistencies in the quality of the RDF generated. The final code of this project is available in Victor’s repository on GitHub. He was mentored by Mariano Rico and Nandana Mihindukulasooriya.

Meet Aman Mehta

.. whose project aimed at building a model which allows users to query DBpedia directly using natural language, without the need for any previous experience in SPARQL. His task was to train a Sequence-2-Sequence Neural Network model to translate any Natural Language Query (NLQ) into the corresponding SPARQL query. See the results of this project in Aman’s GitHub repository. Tommaso Soru and Ricardo Usbeck were his DBpedia mentors during the summer.

Finally, these projects will contribute to the overall development of DBpedia. We are very satisfied with the contributions and results our students produced. Furthermore, we would like to genuinely thank all students and mentors for their effort. We hope to stay in touch and see a few faces again next year.

A special thanks goes out to all mentors and students whose projects did not make it through.

GSoC Mentor Summit

Now it is the mentors’ turn to take part in this year’s GSoC Mentor Summit, October 12th till 14th. This year, Mariano Rico and Thiago Galery will represent DBpedia at the event. Their task is to engage in a vital discussion about this year’s program, about lessons learned, and about highlights and drawbacks they experienced during the summer. Hopefully, they will return with new ideas from the exchange with mentors from other open source projects. In turn, we hope to improve our part of the program for students and mentors.

Sit tight, follow us on Twitter and we will update you about the event soon.

Yours DBpedia Association

DBpedia Chapters – Survey Evaluation – Episode Two

Welcome back to part two of the evaluation of the surveys we conducted with the DBpedia chapters.

Survey Evaluation – Episode Two

The second survey focused on technical matters. We asked the chapters about their usage of DBpedia services and tools, about technical problems and challenges, and about potential ways to overcome them. Have a look below.

Again, only nine out of 21 DBpedia chapters participated in this survey. That means the results again represent only roughly 42% of the DBpedia chapter population.

The good news is, all chapters maintain a local DBpedia endpoint. Yay! More than 55% of the chapters perform their own extraction. The rest apply a hybrid approach, reusing some datasets from DBpedia releases and additionally extracting some on their own.

Datasets, Services and Applications

In terms of frequency of dataset updates, the situation is as follows: 44.4% of the chapters update them once a year. The answers of the remaining ones differ in equal shares, depending on various factors. See the graph below.

When it comes to the maintenance of links to local datasets, most of the chapters do not have additional ones. However, some do maintain links to, for example, Greek WordNet, the National Library of Greece Authority record, Geonames.jp and the Japanese WordNet. Furthermore, some of the chapters even host other datasets of local interest, but mostly in a separate endpoint, so they keep separate graphs.

Apart from hosting their own endpoint, most chapters maintain one or more additional services such as Spotlight, LodLive or LodView.

Moreover, the chapters have additional applications they developed on top of DBpedia data and services.

They also gave us some reasons why they were not able to deploy certain DBpedia-related services. See their replies below.

DBpedia Chapter set-up

Lastly, we asked the technical heads of the chapters what the hardest task in setting up their chapter had been. The answers, again, vary as the starting position of each chapter differed. Read a few of their replies below.

The hardest technical task for setting up the chapter was:

  • to keep virtuoso up to date
  • the chapter specific setup of DBpedia plugin in Virtuoso
  • the Extraction Framework
  • configuring Virtuoso for serving data using server’s FQDN and Nginx proxying
  • setting up the Extraction Framework, especially for abstracts
  • correctly setting up the extraction process and the DBpedia facet browser
  • fixing internationalization issues, and updating the endpoint
  • keeping the extraction framework working and up to date
  • updating the server to the specific requirements for further compilation – we work on Debian

Final words

With all the data and results we gathered, we will get together with our chapter coordinator to develop a strategy for addressing the technical as well as organizational issues the surveys revealed. By that, we hope to facilitate a better exchange between the chapters and with us, the DBpedia Association. Moreover, we intend to minimize the barriers to setting up and maintaining a DBpedia chapter so that our chapter community may thrive and prosper.

In the meantime, spread your work and share it with the community. Do not forget to follow and tag us on Twitter (@dbpedia). You may also want to subscribe to our newsletter.

We will keep you posted about any updates and news.

Yours

DBpedia Association

DBpedia Chapters – Survey Evaluation – Episode One

DBpedia Chapters – Challenge Accepted

The DBpedia community currently comprises more than 20 language chapters, ranging from Basque and Japanese to Portuguese and Ukrainian. Managing such a variety of chapters is a huge challenge for the DBpedia Association because individual requirements are as diverse as the different languages the chapters represent. There are chapters that started out back in 2012, such as DBpediaNL. Others, like the Catalan chapter, are brand new and have different wants and needs.

So, in order to optimize chapter development, we aim to formalize an official DBpedia Chapter Consortium. It permits a close dialogue with the chapters in order to address all relevant matters regarding communication and organization as well as technical issues. We want to provide the community with the best basis to set up new chapters and to maintain or develop the existing ones.

Our main targets for this are to: 

  • improve general chapter organization,
  • unite all DBpedia chapters with central DBpedia,
  • promote better communication and understanding, and
  • create synergies for further developments and make it easier to access information about what is being done by all DBpedia bodies.

As a first step, we needed to collect information about the current state of things. Hence, we conducted two surveys: one directed at chapter leads and the other at technical heads.

In this blog post, we would like to present the results of the survey conducted with the chapter leads. It addressed matters of communication and organizational relevance. Unfortunately, only nine out of 21 chapters participated, so the outcome of the survey speaks only for roughly 42% of all DBpedia chapters.

Chapter Survey – Episode One

Most chapters have very little personnel committed to the work done for the chapter, for various reasons. 66% of the chapters have only one to four people involved in the core work. Only one chapter has about ten people working on it.

Overall, the chapters use various marketing channels for promotion, visibility and outreach. Websites as well as event participation, Twitter and Facebook are among their favourite channels.

The following chart shows how chapters currently communicate organizational and communication issues in their respective chapter and to the DBpedia Association.

The second chart shows that a third of the chapters favour an exchange among chapters and with the DBpedia Association via the discussion mailing list as well as regular chapter calls.

The survey results show that 66.6% of the chapters do not consider their current mode of communication efficient enough. They think that their communication with the DBpedia Association should improve.

As pointed out before, most chapters have only little personnel resources. It is no wonder that most of them need help to improve the work and impact of their chapter. The following chart shows the kind of support chapters require to improve their overall work, organization and communication. Most noteworthy, technical, marketing and organizational support are the top three aspects the chapters need help with.

The good news is that all of the chapters maintain a DBpedia website. However, the frequency of updates varies among them. See the chart on the right.

Earlier this year, we announced that we would like to align all chapter websites with the main DBpedia website. That includes a common structure and a corporate design similar to the main one. Above all, this is important for the overall image and recognition factor of DBpedia in the tech community. With respect to that, we inquired whether chapters would like to participate in an alignment of the websites or not.

With respect to the marketing support the chapters require from the Association, more than 50% of the chapters would like to be frequently promoted via the main DBpedia Twitter channel.

Good news: just forward us your news or tag us with @dbpedia and we will share ’em.

Almost there.

Finally, we asked about the chapters’ requirements to improve their work and the impact of their results.

Bottom line

All in all, we are very grateful for your contribution. This data will help us develop a strategy to work towards the targets mentioned above. We will now use it to conceptualize a small program to assist chapters in their organization and marketing endeavours. Furthermore, the information given will also help us to tackle the different issues that arose, implement the necessary support and improve chapter development and chapter visibility.

In episode two, we will delve into the results of the technical survey. Sit tight and follow us on Twitter, Facebook, LinkedIn or subscribe to our newsletter.

Finally, one last remark. If you want to promote news of your chapter or otherwise like to increase its visibility, you are always welcome to:

  • forward us the respective information to be promoted via our marketing channels 
  • use your own Twitter channel and tag your post with @dbpedia, so we can retweet your news.
  • always use #dbpediachapters

Looking forward to your news.

Yours

DBpedia Association

Beta-Test Updates

While everyone at the DBpedia Association was preparing for the SEMANTiCS Conference in Vienna, we also managed to reach an important milestone regarding the beta-test for our data release tool.

First and foremost, 3500 files have already been published with the plugin. These files will be part of the new DBpedia release and are available on our LTS repository.

Secondly, the documentation of the testing has been brought into good shape. Feel free to drop by and check it out.
Thirdly, we reached our first interoperability goal: the current metadata is sufficient to produce RSS 1.0 feeds. See here for further information. We also defined a loose roadmap on top of the readme, where interoperability with DCAT and DCAT-AP has high priority.
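For readers wondering what “sufficient to produce RSS 1.0 feeds” amounts to: RSS 1.0 is RDF-based, so a feed can be assembled directly from the publishing metadata. Below is a minimal, hypothetical sketch; all titles and links are placeholders, not actual repository entries.

# Minimal RSS 1.0 (RDF Site Summary) feed built from release metadata.
# All titles and links are placeholders for illustration.
from xml.sax.saxutils import escape

items = [{"title": "Example release 2018.09.12",
          "link": "http://example.org/repo/release-2018.09.12"}]

li = "".join(f'<rdf:li resource="{escape(i["link"])}"/>' for i in items)
body = "\n".join(
    f'  <item rdf:about="{escape(i["link"])}">\n'
    f'    <title>{escape(i["title"])}</title>\n'
    f'    <link>{escape(i["link"])}</link>\n'
    f'  </item>'
    for i in items)

print(f"""<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/">
  <channel rdf:about="http://example.org/repo/feed.rdf">
    <title>Databus releases</title>
    <link>http://example.org/repo/</link>
    <description>Newly published datasets</description>
    <items><rdf:Seq>{li}</rdf:Seq></items>
  </channel>
{body}
</rdf:RDF>""")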

Now we have some time to support you and work with you one on one, and also to prepare the configurations that help you set up your data releases. Lastly, we have already received data from DNB and SUMO, so we will start to look into these more closely.

Thanks to all the beta-testers for your nice work.

We keep you posted.

Yours

DBpedia Association

Beta-tests for the DBpedia Databus commence

Finally, we are proud to announce that the beta-testing of our data release tool for data releases on the DBpedia Databus is about to start.

In the past weeks, our developers at DBpedia have been developing a new data release tool to release datasets on the DBpedia Databus. In that context, we are still looking for beta-testers who have a dataset they wish to release. Sign up here and benefit from increased visibility for your dataset and the work you have done.

We are now preparing the first internal test with our own dataset to ensure the data release tool is ready for the testers. During the testing process, beta-testers will discuss occurring problems, challenges and ideas for improvement via the DBpedia #releases channel on Slack to profit from each other’s knowledge and skills. Issues are documented via GitHub.

The whole testing process for the data release tool follows a 4-milestones plan:

Milestone One: Every tester needs to have a WebID to release data on the DBpedia Databus. In case you are interested in how to set up a WebID, our tutorial will help you a great deal; a rough sketch of the certificate part follows after the milestones.

Milestone Two: For their datasets, testers will generate DataIDs, which provide detailed descriptions of the datasets and their different manifestations, as well as relations to agents like persons or organizations with regard to their rights and responsibilities.

Milestone Three: This milestone is considered achieved once an RSS feed can be generated. Additionally, bugs that arose during the previous phases should have been fixed. We also want to collect the testers’ particular demands and wishes that would benefit the tool or the process. A second release can be attempted to check how integrated fixes and changes work out.

Milestone Four: This milestone marks the final upload of the dataset to the DBpedia Databus which is hopefully possible in about 3 weeks.
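As mentioned under milestone one, every tester needs a WebID. For the curious, here is a rough sketch of the certificate part: a self-signed certificate whose Subject Alternative Name carries the WebID URI, as WebID-TLS expects. The profile URL below is a placeholder, and our tutorial remains the authoritative guide.

# Sketch: a self-signed certificate carrying a WebID URI in its
# Subject Alternative Name, as used for WebID-TLS authentication.
# The WebID below is a placeholder, not a real profile.
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

webid = "https://example.org/profile/card#me"  # hypothetical WebID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Databus beta tester")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.UniformResourceIdentifier(webid)]),
        critical=False)
    .sign(key, hashes.SHA256())
)
with open("webid-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))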

For updates on the beta-test, follow this link.

Looking forward to working with you…

Yours,

DBpedia Association

PS: In case you want to get one of the last spots in the beta-testing team, just sign up here, get yourself a WebID and start testing.

DBpedia at LSWT 2018

Unfortunately, with the new GDPR, we experienced some trouble with our blog. That is why this post is published a little later than anticipated.

There you go.

With our new strategic orientation and the emergence of the DBpedia Databus, we wanted to meet some DBpedia enthusiasts of the German DBpedia Community.

The recently hosted 6th LSWT (Leipzig Semantic Web Day) on June 18th was the perfect platform for DBpedia to meet with researchers, industry and other organizations to discuss current and future developments of the semantic web.

Under the motto “Linked Enterprises Data Services”, experts in academia and industry talked about the interlinking of open and commercial data of various domains such as e-commerce, e-government, and digital humanities.

Sören Auer, DBpedia endorser and board member as well as director of TIB, the German National Library of Science and Technology, opened the event with an exciting keynote. Recapping the evolution of the Semantic Web and giving a glimpse into the future of integrating more cognitive processes into the study of data, he highlighted the importance of AI, deep learning, and machine learning. These, as well as cognitive data, are no longer in their early stages but have advanced to fully grown sciences.

Shortly after, Sebastian Hellmann, director of the DBpedia Association, presented the new face of DBpedia as a global open knowledge network. DBpedia is not just the most successful open knowledge graph so far, but also has deep insider knowledge about all connected open knowledge graphs (OKGs) and how they are governed.

With our new credo, “connecting data is about linking people and organizations”, the global DBpedia platform aims at sharing the efforts of OKG governance, collaboration, and curation to maximize societal value and develop a linked data economy.

The DBpedia Databus functions as a metadata subscription repository, a platform that allows exchanging, curating and accessing data between multiple stakeholders. To maximize the potential of their data, data owners need a WebID to sign their metadata with a private key and make use of the full Databus services. Instead of one huge monolithic release every 12 months, the Databus enables easier contributions and hence partial releases (core, mapping, wikidata, text, reference extraction) at their own speed, but in much shorter intervals (monthly). Uploading data to the Databus means connecting and comparing your data to the network. We will offer storage services, free & freemium services as well as data-as-a-service. A first demo is available via http://downloads.dbpedia.org/databus

During the lunch break, LSWT participants had time to check out the poster presentations. Four of the 18 posters used DBpedia as a source. One of them was Birdory, a memory game developed during the Coding da Vinci hackathon, which started in April 2018. Moreover, other posters also used the DBpedia vocabulary.

Afternoon Session

In the afternoon, participants of LSWT2018 joined hands-on tutorials on SPARQL and WebID. During the SPARQL tutorial, ten participants learned about the different query types, graph patterns, filters, and functions as well as how to construct SPARQL queries step by step with the help of a funny Monty Python example.
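For readers who missed the session, a query in the same spirit illustrates the pattern-filter-function style that was taught. This is our own illustration, not the tutorial’s actual query.

# A DBpedia query in the spirit of the tutorial's Monty Python example:
# basic graph pattern + FILTER + string functions.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?film ?label WHERE {
  ?film a dbo:Film ;          # graph pattern: films with an English label
        rdfs:label ?label .
  FILTER (LANG(?label) = "en" && CONTAINS(?label, "Monty Python"))
} LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])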

Afterwards, DBpedia hosted a hands-on workshop on WebID, the password-free authentication method using semantics. The workshop aimed at enabling participants to set up a public/private key pair, a certificate, and a WebID. All they needed to bring was a laptop and their own web space. Supervised by DBpedia’s executive director Dr. Sebastian Hellmann and developer Jan Forberg, people had to log into a test web service at the end of the session to see if everything had worked out. All participants seemed well satisfied with the workshop; even those who could not finish it successfully got a lot of individual help and many hints. For support purposes, DBpedia will stay in close touch with those participants.

Thanks to the Institut für Angewandte Informatik as well as to the LEDS project and eccenca for organizing LSWT2018 and keeping the local semantic web community thriving.

Upcoming Events:

We are currently looking forward to our next DBpedia meetup in Lyon, France on July 3rd and the DBpedia Day co-located with Semantics 2018 in Vienna. Contributions to both events are still welcome. Send your inquiry to dbpedia@infai.org.

Yours

DBpedia Association

The DBpedia Databus – transforming Linked Data into a networked data economy

Working with data is hard and repetitive. That is why we are more than happy to announce the launch of the alpha version of our DBpedia Databus, a platform that simplifies working with data.

We have studied the data network for 10 years now, and we conclude that organizations with open data are struggling to work together properly. Even though they could and should collaborate, they are hindered by technical and organizational barriers, and they duplicate work on the same data. On the other hand, companies selling data cannot do so in a scalable way. Consumers are left empty-handed, trapped between the choice of inferior open data and buying from a jungle-like market.

We need to rethink the incentives for linking data

Vision

We envision a hub where everybody uploads data. In that hub, useful operations like versioning, cleaning, transformation, mapping, linking, merging and hosting are done automagically on a central communication system, the bus, and then dispersed again in a decentralized network to the consumers and applications. On the Databus, data flows from data producers through the platform to the consumers (left to right), while any errors or feedback flow in the opposite direction, reaching the data source to provide a continuous integration service that improves the data at the source.

The DBpedia Databus is a platform that allows exchanging, curating and accessing data between multiple stakeholders. Any data entering the bus will be versioned, cleaned, mapped, linked and its licenses and provenance tracked. Hosting in multiple formats will be provided to access the data either as dump download or as API.

Publishing data on the Databus means connecting and comparing your data to the network

If you are grinding your teeth over how to publish data on the web, you can just use the Databus to do so. Data loaded on the bus will be highly visible, available and queryable. You should think of it as a service:

  • Visibility guarantees that your citations and reputation go up.
  • Besides a web download, we can also provide a Linked Data interface, a SPARQL endpoint, Lookup (autocomplete) or other means of availability (like AWS or Docker images).
  • Any distribution we are doing will funnel feedback and collaboration opportunities your way to improve your dataset and your internal data quality.
  • You will receive an enriched dataset, which is connected and complemented with any other available data (see the same folder names in data and fusion folders).

How it works at the moment

Integration of data is easy with the Databus. We have been integrating and loading additional datasets alongside DBpedia for the world to query. Popular datasets are ICD10 (medical data) as well as organizations and persons. We are still in an initial state, but we have already loaded 10 datasets (6 from DBpedia, 4 external) on the bus using these phases:

  1. Acquisition: data is downloaded from the source and logged in.
  2. Conversion: data is converted to N-Triples and cleaned (syntax parsing, datatype validation, and SHACL); a toy version of this step is sketched below this list.
  3. Mapping: the vocabulary is mapped onto the DBpedia Ontology and converted (we have been doing this for Wikipedia’s infoboxes and Wikidata, but now we do it for other datasets as well).
  4. Linking: links are mainly collected from the sources, cleaned and enriched.
  5. IDying: all entities found are given a new Databus ID for tracking.
  6. Clustering: IDs are merged into clusters, using one of the Databus IDs as cluster representative.
  7. Data Comparison: each dataset is compared with all other datasets. We have an algorithm that decides on the best value, but the main goal here is transparency, i.e. to see which data value was chosen and how it compares to the other sources.
  8. Fusion: a main knowledge graph is fused from all the sources, i.e. a transparent aggregate.
  9. For each source, we produce a local fused version called the “Databus Complement”. This is a major feedback mechanism for all data providers, where they can see what data they are missing, what data differs in other sources and what links are available for their IDs.
  10. You can compare all data via a web service.
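As referenced in phase 2, here is a toy version of the conversion/cleaning step. It keeps only syntactically valid N-Triples lines; the datatype validation and SHACL checks of the real pipeline are omitted, and the file names are placeholders.

# Toy version of the conversion/cleaning phase: keep only N-Triples
# lines that parse, drop the rest. File names are placeholders.
from rdflib import Graph

def clean_ntriples(in_path: str, out_path: str) -> int:
    """Copy syntactically valid N-Triples lines; return the number dropped."""
    dropped = 0
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if not line.strip() or line.lstrip().startswith("#"):
                continue  # skip blanks and comments
            try:
                Graph().parse(data=line, format="nt")  # one-line syntax check
                dst.write(line)
            except Exception:
                dropped += 1
    return dropped

print(clean_ntriples("input.nt", "cleaned.nt"), "bad triples dropped")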

Contact us via dbpedia@infai.org if you would like to have additional datasets integrated and maintained alongside DBpedia.

From your point of view

Data Sellers

If you are selling data, the Databus provides numerous opportunities for you. You can link your offering to the open entities on the Databus. This allows consumers to discover your services better, since they are shown with each request.

Data Consumers

Open data on the Databus will be a commodity. We are greatly driving down the cost of understanding, retrieving and reformatting the data. We are constantly extending the ways of using the data and are willing to implement any formats and APIs you need. If you are lacking a certain kind of data, we can also scout for it and load it onto the Databus.

Is it free?

Maintaining the Databus is a lot of work, and the servers incur a high cost. As a rule of thumb, we provide everything for free that we can afford to provide for free. DBpedia provided everything for free in the past, but this is not a healthy model, as we can neither maintain quality properly nor grow.

On the Databus everything is provided “As is” without any guarantees or warranty. Improvements can be done by the volunteer community. The DBpedia Association will provide a business interface to allow guarantees, major improvements, stable maintenance, and hosting.

License

Final databases are licensed under ODC-By. This covers our work on the recomposition of data. Each fact is individually licensed; e.g. Wikipedia abstracts are CC-BY-SA, some are CC-BY-NC, some are copyrighted. This means that the data is available for research, informational and educational purposes. We recommend contacting us for any professional use of the data (clearing) so we can guarantee that legal matters are handled correctly. Otherwise, professional use is at your own risk.

Current Statistics

The Databus data is available at http://downloads.dbpedia.org/databus/ ordered into three main folders:

  • Data: the data that is loaded on the Databus at the moment
  • Global: a folder that contains provenance data and the mappings to the new IDs
  • Fusion: the output of the Databus

Most notably you can find:

  • Provenance mapping of the new IDs in global/persistence-core/cluster-iri-provenance-ntriples/ and global/persistence-core/global-ids-ntriples/
  • The final fused version for the core: fusion/core/fused/
  • A detailed JSON-LD file for data comparison: fusion/core/json/
  • Complements, i.e. the enriched Dutch DBpedia version: fusion/core/nl.dbpedia.org/

(Note that the file and folder structure are still subject to change)
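If you want to poke at these folders programmatically, the quick sketch below scrapes a folder’s plain HTTP directory listing. It assumes the listing format stays as it is today, which, as noted, may change.

# Quick sketch: list the files in one Databus folder by scraping its
# plain HTTP directory listing. The folder layout may change.
import re
import urllib.request

folder = "http://downloads.dbpedia.org/databus/fusion/core/fused/"
html = urllib.request.urlopen(folder).read().decode("utf-8", "replace")
for name in sorted(set(re.findall(r'href="([^"]+)"', html))):
    if name.startswith(("?", "/", "../")):
        continue  # skip sort links and the parent-directory entry
    print(folder + name)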

Sources

Upcoming Developments

Data market

  • build your own data inventory and merchandise your data via Linked Data or via secure named graphs in the DBpedia SPARQL Endpoint (WebID + TLS + OpenLink’s Virtuoso database)

DBpedia Marketplace

  • Offer your Linked Data tools, services, products
  • Incubate new research into products
  • Example: Support for RDFUnit (https://github.com/AKSW/RDFUnit, created by the SHACL editor), assistance with SHACL writing and deployment of the open-source software

DBpedia and the Databus will transform Linked Data into a networked data economy

For any questions or inquiries related to the new DBpedia Databus, please contact us via dbpedia@infai.org

Yours,

DBpedia Association

DBpedia supports young developers

Supporting young and aspiring developers has always been part of DBpedia’s philosophy. Through various internships and collaborations with programmes such as Google Summer of Code, we have been able not only to meet aspiring developers but also to establish long-lasting relationships with these DBpedians, ensuring sustainable progress for and with DBpedia. For 6 years now, we have been part of Google Summer of Code, one of our favourite programmes. This year, we are also taking part in Coding da Vinci, a German-based cultural data hackathon, where we support young hackers, coders and smart minds with DBpedia datasets.

DBpedia at Google Summer of Code 2018

This year, DBpedia will participate for the sixth time in a row in the Google Summer of Code program (GSoC). Together with our amazing mentors, we drafted 9 project ideas which GSoC applicants could apply to. Since March 12th, we received many proposal drafts, out of which 12 final project proposals were submitted. Competition is very high as student slots are always limited. Our DBpedia mentors critically reviewed all proposals for their potential before allocating them one of the rare open slots in the GSoC program. Finally, on Monday, April 23rd, our 6 finalists were announced. We are very proud and looking forward to the upcoming months of coding. The following projects have been accepted and will hopefully be realized during the summer.

Our gang of DBpedia mentors comprises very experienced developers who have been working with us on this project for several years now. Speaking of sustainability, we also have former GSoC students on board, who get the chance to mentor projects building on ideas from past GSoCs. And while students and mentors start bonding, we are really looking forward to the upcoming months of coding – may it be inspiring, fun and fruitful.

DBpedia @ Coding da Vinci 2018

As already mentioned in the previous newsletter, DBpedia is part of CodingDaVinciOst 2018. Founded in Berlin in 2014, Coding da Vinci is a platform for cultural heritage institutions and the hacker, developer, designer, and gamer community to jointly develop new creative applications from open cultural data during a series of hackathon events. In this year’s edition, DBpedia provides its datasets to support more than 30 cultural institutions, enriching their datasets so that participants of the hackathon can make the most of the data. Among the participating cultural institutions are, for example, the university libraries of Chemnitz, Jena, Halle, Freiberg, Dresden and Leipzig as well as the Sächsisches Staatsarchiv, the Museum für Druckkunst Leipzig, the Museum für Naturkunde Berlin, the Duchess Anna Amalia Library, and the Museum Burg Posterstein.

CodingDaVinciOst 2018, the current edition of the hackathon, hosted a kick-off weekend at the Bibliotheca Albertina, the University Library in Leipzig. During the event, DBpedia offered a hands-on workshop for newbies and interested hackathon participants who wanted to learn how to enrich their project ideas with DBpedia or how to solve potential problems in their projects with DBpedia.

We are now looking forward to the upcoming weeks of coding and hacking and can’t wait to see the results on June 18th, when the final projects will be presented and awarded. We wish all the coders and hackers a pleasant and happy hacking time. Check our DBpedia Twitter account for updates and the latest news.

If you have any questions, would like to support us in any way, or would like to learn more about DBpedia, just drop us a line via dbpedia@infai.org

Yours,
DBpedia Association