Category Archives: Support

DBpedia @ Google Summer of Code 2016

DBpedia participated for the fourth time in the Google Summer of Code program. This was a quite competitive year (like every year), with more than forty students applying for a DBpedia project. In the end, 8 great students from all around the world were selected and will work on their projects during the summer. Here’s a detailed list of the projects:

A Hybrid Classifier/Rule-based Event Extractor for DBpedia by Vincent Bohlen

In modern times the amount of information published on the internet is growing to an immeasurable extent. Humans are no longer able to gather all the available information by hand and depend more and more on machines to collect relevant information automatically. This is why automatic information extraction, and especially automatic event extraction, is important. In this project I will implement a system for event extraction using classification and rule-based event extraction. The underlying data for both approaches will be identical. I will gather Wikipedia articles and perform a variety of NLP tasks on the extracted texts. First I will annotate the named entities in the text using named entity recognition performed by DBpedia Spotlight. Additionally I will annotate the text with frame semantics using FrameNet frames. I will then use the collected information, i.e. frames, entities and entity types, with the two aforementioned methods to decide whether the annotated text describes an event or not. Mentor: Marco Fossati (SpazioDati)
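A minimal sketch of the rule-based side of such a system might look as follows. The frame names, entity types and the rule itself are purely illustrative assumptions, not the project’s actual rule set:

```python
# Minimal rule-based event detector sketch: a text span annotated with
# FrameNet-style frames and typed entities is classified as an event
# if it evokes an event-like frame and mentions both a time and a place.
EVENT_FRAMES = {"Attack", "Competition", "Arriving"}  # hypothetical rule set

def is_event(frames, entity_types):
    """frames: set of frame names; entity_types: set of entity type labels."""
    evokes_event = bool(frames & EVENT_FRAMES)
    has_time = "dbo:TimePeriod" in entity_types or "DATE" in entity_types
    has_place = "dbo:Place" in entity_types
    return evokes_event and has_time and has_place

print(is_event({"Attack"}, {"dbo:Place", "DATE"}))  # → True
```

The classifier-based approach would consume the same frame/entity features, but learn the decision boundary from labelled examples instead of hand-written rules.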

Automatic mappings extraction by Aditya Nambiar

DBpedia currently maintains a mapping from Wikipedia infobox properties to the DBpedia ontology, since several similar templates exist to describe the same type of infobox. The aim of the project is to enrich the existing mappings and possibly correct incorrect mappings using Wikidata.

Several Wikipedia pages use Wikidata values directly in their infoboxes. Hence, by using the mapping between Wikidata properties and DBpedia ontology classes, along with the infobox data across several such wiki pages, we can collect many such mappings. The first phase of the project revolves around using various such Wikipedia templates, finding their usages across Wikipedia pages and extracting as many mappings as possible.

In the second half of the project we use machine learning techniques to take care of any accidental or outlier usage of Wikidata mappings in Wikipedia. At the end of the project we will be able to obtain a correct set of mappings with which we can enrich the existing ones. Mentor: Markus Freudenberg (AKSW/KILT)
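The counting idea behind the two phases can be sketched in a few lines: tally which Wikidata property fills each infobox field across pages, then discard low-support pairs as accidental usages. The property names and the threshold below are illustrative assumptions, not extracted data:

```python
from collections import Counter, defaultdict

# Sketch: infer infobox-field -> Wikidata-property mappings by counting,
# across many pages, which Wikidata property fills each infobox field.
# Low-support pairs are discarded as accidental/outlier usages.
observations = [
    ("birth_place", "P19"), ("birth_place", "P19"),
    ("birth_place", "P27"),            # outlier usage on one page
    ("spouse", "P26"),
]

counts = defaultdict(Counter)
for infobox_field, wikidata_prop in observations:
    counts[infobox_field][wikidata_prop] += 1

def inferred_mapping(field, min_support=2):
    prop, n = counts[field].most_common(1)[0]
    return prop if n >= min_support else None

print(inferred_mapping("birth_place"))  # → 'P19'
print(inferred_mapping("spouse"))       # → None (support below threshold)
```

In the project itself the filtering step would be learned rather than a fixed threshold, but the input signal is the same co-occurrence data.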

Combining DBpedia and Topic Modelling by wojtuch

DBpedia, a crowd-sourced open community project extracting structured content from Wikipedia, stores this information in a huge RDF graph. DBpedia Spotlight is a tool which identifies the DBpedia resources that are mentioned in a document.

Using DBpedia Spotlight to extract named entities from Wikipedia articles and then applying a topic modelling algorithm (e.g. LDA) with the URIs of DBpedia resources as features would result in a model capable of describing documents by the proportions of the topics covering them. Because the topics themselves are also represented by DBpedia URIs, this approach could yield a novel RDF hierarchy and ontology, with insights for further analysis of the emerging subgraphs.

The direct implication and first application scenario of this project would be utilizing the inference engine in DBpedia Spotlight as an additional step after a document has been annotated, predicting its topic coverage. Mentor: Alexandru Todor (FU Berlin)
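The feature-extraction step of this pipeline is simple to sketch: each document becomes a bag of entity URIs instead of a bag of words, and those counts are what a topic model such as LDA would consume. The annotation output below is illustrative, not a real Spotlight call:

```python
from collections import Counter

# Sketch: represent each document as a bag of the DBpedia resource URIs
# that Spotlight annotated in it; the resulting count vectors are the
# features a topic model such as LDA would consume.
annotated_docs = [
    ["dbr:Albert_Einstein", "dbr:Physics", "dbr:Physics"],
    ["dbr:Physics", "dbr:Quantum_mechanics"],
]

vocabulary = sorted({uri for doc in annotated_docs for uri in doc})

def to_vector(doc):
    c = Counter(doc)
    return [c[uri] for uri in vocabulary]

print(vocabulary)
print(to_vector(annotated_docs[0]))  # counts aligned to the vocabulary
```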

DBpedia Lookup Improvements by Kunal.Jha

DBpedia is one of the most extensive and most widely used knowledge bases, covering over 125 languages. DBpedia Lookup is a web service that allows users to obtain various DBpedia URIs for a given label (keywords/anchor texts). The service provides two different types of search APIs, namely Keyword Search and Prefix Search. The lookup service currently returns query results in XML (default) and JSON formats and works for English only. It is based on a Lucene index providing a weighted label lookup, which combines string similarity with a relevance ranking in order to find the most relevant matches for a given label. As part of GSoC 2016, I propose to implement improvements intended to make the system more efficient and versatile. Mentor: Axel Ngonga (AKSW)
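The weighted label lookup can be illustrated with a tiny sketch: blend string similarity between query and label with a precomputed relevance score, then rank. The index entries, weights and scoring formula here are illustrative, not DBpedia Lookup’s actual data or ranking function:

```python
from difflib import SequenceMatcher

# Sketch of a weighted label lookup: combine string similarity between the
# query and a label with a precomputed relevance score (e.g. how often the
# resource is linked to). Index contents and weights are hypothetical.
index = {
    "Berlin": ("dbr:Berlin", 0.9),
    "Berlin, New Hampshire": ("dbr:Berlin,_New_Hampshire", 0.2),
}

def lookup(query, alpha=0.5):
    def score(label, relevance):
        sim = SequenceMatcher(None, query.lower(), label.lower()).ratio()
        return alpha * sim + (1 - alpha) * relevance
    best = max(index, key=lambda label: score(label, index[label][1]))
    return index[best][0]

print(lookup("berlin"))  # → 'dbr:Berlin'
```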

Inferring infobox template class mappings from Wikipedia + Wikidata by Peng_Xu

This project aims at finding mappings between the classes in the DBpedia ontology (e.g. dbo:Person, dbo:City) and infobox templates on pages of Wikipedia resources, using machine learning. Mentor: Nilesh Chakraborty (University of Bonn)

Integrating RML in the DBpedia extraction framework by wmaroy

This project is about integrating RML in the DBpedia extraction framework. DBpedia is derived from Wikipedia infoboxes using the extraction framework and mappings defined in wikitext syntax. A next step is to replace the wikitext-defined mappings with RML. To accomplish this, adjustments will have to be made to the extraction framework. Mentor: Dimitris Kontokostas (AKSW/KILT)
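To give an idea of the target format, here is a hedged sketch of what an infobox mapping could look like once expressed in RML. The source name, iterator and field references are hypothetical, not the framework’s actual configuration:

```turtle
@prefix rr:  <http://www.w3.org/ns/r2rml#> .
@prefix rml: <http://semweb.mmlab.be/ns/rml#> .
@prefix ql:  <http://semweb.mmlab.be/ns/ql#> .
@prefix dbo: <http://dbpedia.org/ontology/> .

# Illustrative sketch: an RML triples map standing in for a wikitext-defined
# infobox mapping (source and field names are hypothetical).
<#PersonInfoboxMapping> a rr:TriplesMap ;
    rml:logicalSource [
        rml:source "Infobox_person" ;
        rml:referenceFormulation ql:JSONPath ;
        rml:iterator "$"
    ] ;
    rr:subjectMap [
        rr:template "http://dbpedia.org/resource/{name}" ;
        rr:class dbo:Person
    ] ;
    rr:predicateObjectMap [
        rr:predicate dbo:birthPlace ;
        rr:objectMap [ rml:reference "birth_place" ]
    ] .
```

Because RML is plain RDF, mappings in this style can be validated, queried and shared with standard tooling, which is part of the appeal over the wikitext syntax.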

The List Extractor by FedBai

The project focuses on the extraction of relevant but hidden data which lies inside lists on Wikipedia pages. This information is unstructured and thus cannot easily be used to form semantic statements and be integrated into the DBpedia ontology. Hence, the main task consists in creating a tool which takes one or more Wikipedia pages containing lists as input and constructs appropriate mappings to be inserted into a DBpedia dataset. The extractor must prove to work well on a given domain and must be extensible enough to generalize to others. Mentor: Marco Fossati (SpazioDati)
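The core parsing step can be sketched quickly: pull the linked items out of a wikitext bullet list and emit candidate statements. The subject and the predicate derived from the section heading are hypothetical mapping choices, not the project’s actual output:

```python
import re

# Sketch: extract items from a wikitext bullet list and emit candidate
# (subject, predicate, object) statements. Choosing dbo:album based on
# the "Discography" heading is a hypothetical mapping step.
wikitext = """==Discography==
* [[Debut Album]] (1999)
* [[Second Album]] (2003)
"""

# Capture the link target of each bullet item: "* [[Target]] ..." or "* [[Target|label]] ..."
items = re.findall(r"^\*\s*\[\[([^\]|]+)", wikitext, flags=re.M)
triples = [("dbr:Some_Artist", "dbo:album", f"dbr:{t.replace(' ', '_')}")
           for t in items]
print(triples)
```

The hard part of the real project is of course deciding, per domain, which predicate a list maps to; the parsing above is only the first step.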

The Table Extractor by s.papalini

Wikipedia is full of data hidden in tables. The aim of this project is to explore ways of taking advantage of all the data represented in tables on Wikipedia pages, in order to populate the different versions of DBpedia with new data of interest. The Table Extractor is to be the engine of this data “revolution”: it will extract the semi-structured data from all those tables now scattered across most Wikipedia pages. Mentor: Marco Fossati (SpazioDati)

At the beginning of September 2016 you will receive news about the successful Google Summer of Code 2016 student projects. Stay tuned and follow us on Facebook, Twitter or visit our website for the latest news.

Your DBpedia Association

6th DBpedia Community Meeting in The Hague 2016

3 more days to go…

until we finally meet again for our next DBpedia Community Meeting, which is hosted by the National Library of the Netherlands in The Hague on February 12th. The day before, we will hold a welcome reception (5–8 pm) with snacks and drinks at TNO – New Babylon.

Only 15 seats are left for the next DBpedia Community Meeting, so come and get your ticket to be part of this event.

The 6th edition of this event covers a discussion about the Dutch DBpedia becoming the first chapter with institutional support of the new DBpedia, as well as a session on the DBpedia ontology by members of the newly founded DBpedia working group. On top of that, we will have a DBpedia showcase session on the DBpedia+ Data Stack 2015-10 release and quality control in DBpedia, as well as presentations about the LIDER and Goose projects. And as usual, our event features a dev and tutorial session to learn about DBpedia.

Experts in the field of semantic technologies from Elsevier and the Dutch Land Registry and Mapping Agency, as well as the Europeana project and the DEN foundation, will speak about topics such as Digital Heritage in the Netherlands and Knowledge Graph Construction and the Role of DBpedia.

Attending the DBpedia Community Meeting is free, but you need to register here. Optionally, if you would like to support DBpedia with a little more than your presence during the event, you can choose a DBpedia support ticket. Have a look here.

We would like to thank the following organizations for sponsoring and supporting our endeavour.

Check our website for further updates and like us on Facebook.

GSoC 2015 is gone, long live GSoC 2016

The submission deadline for mentoring organizations to apply for the 2016 Google Summer of Code is approaching quickly. As DBpedia is again planning to be a vital part of the program, we would like to take this opportunity to give you a little recap of the projects mentored by DBpedia members during the past GSoC, which wrapped up in November 2015.

Dimitris Kontokostas, Marco Fossati, Thiago Galery, Joachim Daiber and Ruben Verborgh, members of the DBpedia community, mentored 8 great students from around the world. Following are some of the projects they completed.

Fact Extraction from Wikipedia Text by Emilio Dorigatti

DBpedia is pretty much mature when dealing with Wikipedia semi-structured content like infoboxes, links and categories. However, unstructured content (typically text) plays the most crucial role, due to the amount of knowledge it can deliver, and few efforts have been carried out to extract structured data out of it. Marco and Emilio built a fact extractor, which understands the semantics of a sentence thanks to Natural Language Processing (NLP) techniques. If you feel playful, you can download the produced datasets. For more details, check out this blog post. P.S.: the project has been cited by Python Weekly and Python Trending. Mentor: Marco Fossati (SpazioDati)

Better context vectors for disambiguation by Philipp Dowling

Better Context Vectors aimed to improve the representation of context used by DBpedia Spotlight by incorporating novel methods from distributional semantics. We investigated the benefits of replacing a word-count-based method with one that uses a model based on word2vec. Our student, Philipp Dowling, implemented the model reader based on a preprocessed version of Wikipedia (leading to a few commits to the awesome library gensim) and the integration with the main DBpedia Spotlight pipeline. Additionally, we integrated a method for estimating weights for the different model components that contribute to disambiguating entities. Mentors: Thiago Galery (Analytyca), Joachim Daiber (Amsterdam Univ.), David Przybilla (Idio)
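The idea of a word2vec-based context vector can be sketched in miniature: average the embeddings of the words around a mention and pick the candidate entity whose vector is most similar. The tiny 2-dimensional “embeddings” and entity vectors below are invented for illustration, not Spotlight’s actual model:

```python
import math

# Sketch: a context vector is the mean of the embeddings of the words
# around a mention; the candidate entity with the highest cosine
# similarity to it wins. All vectors here are hypothetical toy data.
embeddings = {
    "river": [1.0, 0.0], "water": [0.9, 0.1],
    "band":  [0.0, 1.0], "music": [0.1, 0.9],
}
entity_vectors = {"dbr:Nile": [1.0, 0.0], "dbr:Nile_(band)": [0.0, 1.0]}

def mean_vector(words):
    vecs = [embeddings[w] for w in words if w in embeddings]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def disambiguate(context_words):
    ctx = mean_vector(context_words)
    return max(entity_vectors, key=lambda e: cosine(ctx, entity_vectors[e]))

print(disambiguate(["river", "water"]))  # → 'dbr:Nile'
```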


Wikipedia Stats Extractor by Naveen Madhire

Wikipedia Stats Extractor aimed to create a reusable tool to extract raw statistics for Named Entity Linking out of a Wikipedia dump. Naveen built the project on top of Apache Spark and Json-wikipedia, which makes the code more maintainable and faster than its previous alternative (pignlproc). Furthermore, Wikipedia Stats Extractor provides an interface which makes it easier to process Wikipedia dumps for purposes other than entity linking. Extra changes were made in the way surface form stats are extracted and lots of noise was removed, both of which should in principle help entity linking.
Special regards to Diego Ceccarelli, who gave us great insight into how Json-wikipedia works. Mentors: Thiago Galery (Analytyca), Joachim Daiber (Amsterdam Univ.), David Przybilla (Idio)


DBpedia Live extensions by Andre Pereira

DBpedia Live provides near real-time knowledge extraction from Wikipedia. As Wikipedia scales, we needed to move our caching infrastructure from MySQL to MongoDB. This was the first task of Andre’s project. The second task was the implementation of a UI displaying the current status of DBpedia Live, along with some admin utilities. Mentors: Dimitris Kontokostas (AKSW/KILT), Magnus Knuth (HPI)


Adding live-ness to the Triple Pattern Fragments server by Pablo Estrada

DBpedia currently has a highly available Triple Pattern Fragments interface that offloads part of the query processing from the server to the clients. For this GSoC, Pablo developed a new feature for this server so that it automatically keeps itself up to date with new data coming from DBpedia Live. It does this by periodically checking for updates and adding them to an auxiliary database. Pablo developed smart update and smart querying algorithms to manage and serve the live data efficiently. We are excited to let the project out into the wild and see how it performs in real-life use cases. Mentors: Ruben Verborgh (Ghent Univ. – iMinds) and Dimitris Kontokostas (AKSW/KILT)
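The auxiliary-database idea can be sketched with sets: each periodic changeset lists triples to add and to remove, and the query layer answers against the base dump merged with the auxiliary additions and deletions. The data and changeset shape below are illustrative, not the actual DBpedia Live changeset format:

```python
# Sketch: keep an auxiliary store in sync with a live changeset feed.
# Queries are answered against (base dump + additions) - deletions.
base = {("dbr:Berlin", "dbo:populationTotal", "3400000")}
aux_added, aux_removed = set(), set()

def apply_changeset(added, removed):
    """Record one periodic update; later changes override earlier ones."""
    for t in added:
        aux_added.add(t)
        aux_removed.discard(t)
    for t in removed:
        aux_removed.add(t)
        aux_added.discard(t)

def current_triples():
    return (base | aux_added) - aux_removed

apply_changeset(
    added={("dbr:Berlin", "dbo:populationTotal", "3500000")},
    removed={("dbr:Berlin", "dbo:populationTotal", "3400000")},
)
print(sorted(current_triples()))
```

Keeping additions and deletions separate from the base dump means the large static index never has to be rewritten between full releases, which is what makes the live interface cheap to keep current.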

Registration for mentors for GSoC 2016 starts next month, and DBpedia will of course try to participate again. If you want to become a mentor or just have a cool idea that seems suitable, don’t hesitate to ping us via the DBpedia discussion or developer mailing lists.

Stay tuned!

Your DBpedia Association

New DBpedia Overview Article

We are pleased to announce that a new overview article for DBpedia is available.

The article covers several aspects of the DBpedia community project:

  • The DBpedia extraction framework.
  • The mappings wiki as the central structure for maintaining the community-curated DBpedia ontology.
  • Statistics on the multilingual support in DBpedia.
  • DBpedia live synchronisation with Wikipedia.
  • Statistics on the interlinking of DBpedia with other parts of the LOD cloud (incoming and outgoing links).
  • Several usage statistics: What kinds of queries are asked against DBpedia and how has that changed over the past years? How much traffic do the official static and live endpoints as well as the download server have? What are the most popular DBpedia datasets?
  • A description of use cases and applications of DBpedia in several areas (drop me a mail if important applications are missing).
  • The relation of DBpedia to the YAGO, Freebase and WikiData projects.
  • Future challenges for the DBpedia project.

After our ISWC 2009 paper on DBpedia, this is the (long overdue) new reference article for DBpedia, which should provide a good introduction to the project. We submitted the article as a system report to the Semantic Web journal.

Download article as PDF.