Category Archives: Dataset releases

New DBpedia Release – 2016-10

We are happy to announce the new DBpedia Release.

This release is based on updated Wikipedia dumps dating from October 2016.

You can download the new DBpedia datasets in N3/Turtle serialisation from http://wiki.dbpedia.org/downloads-2016-10 or directly from http://downloads.dbpedia.org/2016-10/.

This release took us longer than expected, as we had to deal with multiple issues and included new data. Most notable is the addition of the NIF annotation datasets for each language, recording the whole wiki text, its basic structure (sections, titles, paragraphs, etc.) and the included text links. We hope that researchers and developers working on NLP-related tasks will find this addition most rewarding. The DBpedia Open Text Extraction Challenge (next deadline: Mon 17 July for SEMANTiCS 2017) was introduced to instigate new fact extraction based on these datasets.

We want to thank everyone who contributed to this release by adding mappings, new datasets, extractors or issue reports, helping us to increase the coverage and correctness of the released data. We also thank the European Commission and the ALIGNED H2020 project for funding and general support.

Want to read more about the new release? Click below for further details.

Statistics

Altogether the DBpedia 2016-10 release consists of 13 billion (2016-04: 11.5 billion) pieces of information (RDF triples) out of which 1.7 billion (2016-04: 1.6 billion) were extracted from the English edition of Wikipedia, 6.6 billion (2016-04: 6 billion) were extracted from other language editions and 4.8 billion (2016-04: 4 billion) from Wikipedia Commons and Wikidata.

In addition, adding the large NIF datasets for each language edition (see details below) increased the number of triples further by over 9 billion, bringing the overall count up to 23 billion triples.

Changes

  • The NLP Interchange Format (NIF) aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. To extend the versatility of DBpedia, furthering many NLP-related tasks, we decided to extract the complete human-readable text of every Wikipedia page (‘nif_context’), annotated with NIF tags. For this first iteration, we restricted the extent of the annotations to the structural text elements directly inferable from the HTML (‘nif_page_structure’). In addition, all contained text links are recorded in a dedicated dataset (‘nif_text_links’).
    The DBpedia Association started the Open Extraction Challenge on the basis of these datasets. We aim to spur knowledge extraction from Wikipedia article texts in order to dramatically broaden and deepen the body of structured DBpedia/Wikipedia data, and to provide a platform for benchmarking various extraction tools.
    If you want to participate with your own NLP extraction engine, the next deadline, for SEMANTiCS 2017, is July 17.
    We included an example of these structures in section five of the download page of this release.
  • A considerable amount of work has been done to streamline the DBpedia extraction process, converting many of the extraction tasks into an ETL setting (using Spark). We are working in concert with the Semantic Web Company to further enhance these results by introducing a workflow management environment to increase the frequency of our releases.
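To illustrate how the NIF datasets fit together, here is a minimal sketch of how a consumer resolves a text link: ‘nif_context’ carries the full article text (nif:isString), and each entry in ‘nif_text_links’ points back into it via character offsets (nif:beginIndex / nif:endIndex). The article text and link values below are hypothetical.

```python
# Hypothetical NIF context and link annotation; the field names mirror the
# NIF Core properties the datasets are based on (nif:isString, nif:beginIndex,
# nif:endIndex, itsrdf:taIdentRef).
context = "Leipzig is a city in Germany."  # nif:isString of the article

link = {
    "beginIndex": 0,                       # nif:beginIndex
    "endIndex": 7,                         # nif:endIndex
    "taIdentRef": "http://dbpedia.org/resource/Leipzig",  # link target
}

# The surface form of the link is recovered by slicing the context string.
anchor = context[link["beginIndex"]:link["endIndex"]]
print(anchor, "->", link["taIdentRef"])  # Leipzig -> http://dbpedia.org/resource/Leipzig
```

Because every annotation carries its offsets, tools can reconstruct the exact surface form without re-parsing the wiki markup.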

In case you missed it, what we changed in the previous release (2016-04)

  • We added a new extractor for citation data that provides two files:
    • citation links: linking resources to citations
    • citation data: additional data extracted from citations. This is quite an interesting dataset, but we need help cleaning it up
  • In addition to datasets normalised to English DBpedia (en-uris), we additionally provide normalised datasets based on the DBpedia Wikidata (DBw) datasets (wkd-uris). These sorted datasets will be the foundation for the upcoming fusion process with Wikidata. The DBw-based URIs will be the only ones provided from the following releases on.
  • We now filter out triples from the Raw Infobox Extractor that are already mapped. E.g. no more “<x> dbo:birthPlace <z>” and “<x> dbp:birthPlace|dbp:placeOfBirth|… <z>” in the same resource. These triples are now moved to the “infobox-properties-mapped” datasets and not loaded on the main endpoint. See issue 22 for more details.
  • Major improvements in our citation extraction. See here for more details.
  • We incorporated the statistical distribution approach of Heiko Paulheim in creating type statements automatically and providing them as additional datasets (instance_types_sdtyped_dbo).
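The raw-infobox filtering rule described above can be sketched as follows. This is a simplified illustration with hypothetical triples and a hand-picked property correspondence; the real extractor works from the full mapping tables.

```python
DBO = "http://dbpedia.org/ontology/"
DBP = "http://dbpedia.org/property/"

# Illustrative subset of the raw-to-mapped property correspondence.
RAW_TO_MAPPED = {
    DBP + "birthPlace": DBO + "birthPlace",
    DBP + "placeOfBirth": DBO + "birthPlace",
}

triples = [
    ("dbr:Goethe", DBO + "birthPlace", "dbr:Frankfurt"),  # mapped triple
    ("dbr:Goethe", DBP + "birthPlace", "dbr:Frankfurt"),  # raw duplicate
    ("dbr:Goethe", DBP + "occupation", "poet"),           # unmapped raw triple
]

# (subject, property) pairs already covered by a mapped (dbo) triple.
covered = {(s, p) for s, p, o in triples if p.startswith(DBO)}

# A raw triple is filtered out when its mapped counterpart exists for the
# same subject; unmapped raw triples stay.
kept = [(s, p, o) for s, p, o in triples
        if not (p in RAW_TO_MAPPED and (s, RAW_TO_MAPPED[p]) in covered)]

for t in kept:
    print(t)
```

In the release, the filtered raw triples are not discarded but moved to the “infobox-properties-mapped” datasets.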

 

Upcoming Changes

  • DBpedia Fusion: We finally started working again on fusing DBpedia language editions. Johannes Frey is taking the lead in this project. The next release will feature intermediate results.
  • Id Management: Closely related to the DBpedia Fusion project is our effort to introduce our own Id/IRI management and become independent of Wikimedia-created IRIs. This will not entail changing our domain or entity naming regime, but will provide the possibility of adding entities from any source or scope.
  • RML Integration: Wouter Maroy has already provided the necessary groundwork for switching the mappings wiki to an RML-based approach on GitHub. Wouter started working exclusively on implementing the Git-based wiki and the conversion of existing mappings last week. We are looking forward to the results of this process.
  • Further development of the Spark integration and workflow-based DBpedia extraction, to increase the release frequency.

 

New Datasets

  • New languages extracted from Wikipedia:

South Azerbaijani (azb), Upper Sorbian (hsb), Limburgan (li), Minangkabau (min), Western Mari (mrj), Oriya (or), Ossetian (os)

  • SDTypes: We extended the coverage of the automatically created type statements (instance_types_sdtyped_dbo) to English, German and Dutch.
  • Extensions: In the extension folder (2016-10/ext) we provide two new datasets (both are to be considered experimental):
    • DBpedia World Facts: This dataset is authored by the DBpedia Association itself. It lists all countries, all currencies in use and (most) languages spoken in the world, as well as how these concepts relate to each other (spoken in, primary language, etc.) and useful properties like ISO codes (ontology diagram). This dataset extends the very useful LEXVO dataset with facts from DBpedia and the CIA Factbook. Please report any errors or suggestions regarding this dataset to Markus.
    • JRC-Alternative-Names: This resource is a link-based complementary repository of spelling variants for person and organisation names. The data is multilingual and contains up to hundreds of variants per entity. It was extracted from the analysis of news reports by the Europe Media Monitor (EMM), as available on JRC-Names.

Community

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2016-10 ontology encompasses:

  • 760 classes
  • 1,105 object properties
  • 1,622 datatype properties
  • 132 specialised datatype properties
  • 414 owl:equivalentClass and 220 owl:equivalentProperty mappings to external vocabularies

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2016-10 extraction, we used a total of 5887 template mappings (DBpedia 2016-04: 5800 mappings). The top language, gauged by the number of mappings, is Dutch (648 mappings), followed by the English community (606 mappings).

Read more

Credits

  • Markus Freudenberg (University of Leipzig / DBpedia Association) for taking over the whole release process and creating the revamped download & statistics pages.
  • Dimitris Kontokostas (University of Leipzig / DBpedia Association) for conveying his considerable knowledge of the extraction and release process.
  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
  • Václav Zeman and the whole LHD team (University of Prague) for their contribution of additional DBpedia types
  • Alan Meehan (TCD) for performing a big external link cleanup
  • Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to DBpedia ontology.
  • SpringerNature for offering a co-internship to a bright student and developing a closer relation to DBpedia on multiple issues, as well as Links to their SciGraph subjects.
  • Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that provides 5-Star Linked Open Data publication and SPARQL Query Services.
  • OpenLink Software (http://www.openlinksw.com/) collectively for providing the SPARQL Query Services and Linked Open Data publishing infrastructure for DBpedia in addition to their continuous infrastructure support.
  • Ruben Verborgh from Ghent University – imec for publishing the dataset as Triple Pattern Fragments, and imec for sponsoring DBpedia’s Triple Pattern Fragments server.
  • Ali Ismayilov (University of Bonn) for extending and cleaning of the DBpedia Wikidata dataset.
  • All the GSoC students and mentors who have directly or indirectly worked on the DBpedia release
  • Special thanks to members of the DBpedia Association, the AKSW and the Department for Business Information Systems of the University of Leipzig.

The work on the DBpedia 2016-10 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering.

More information about DBpedia is found at http://dbpedia.org as well as in the new overview article about the project available at http://wiki.dbpedia.org/Publications.

Have fun with the new DBpedia 2016-10 release!

YEAH! We did it again ;) – New 2016-04 DBpedia release

Hereby we announce the release of DBpedia 2016-04. The new release is based on updated Wikipedia dumps dating from March/April 2016 featuring a significantly expanded base of information as well as richer and (hopefully) cleaner data based on the DBpedia ontology.

You can download the new DBpedia datasets in a variety of RDF-document formats from: http://wiki.dbpedia.org/downloads-2016-04 or directly here: http://downloads.dbpedia.org/2016-04/

Support DBpedia

During the latest DBpedia meeting in Leipzig we discussed ways to support DBpedia and the benefits this support would bring. For the next two months, we are aiming to raise money to support the hosting of the main services and the next DBpedia release (especially to shorten release intervals). On top of that, we need to buy a new server to host DBpedia Spotlight, which has so far been generously hosted by third parties. If you use DBpedia and want us to keep going forward, we kindly invite you to donate here or become a member of the DBpedia Association.

Statistics

The English version of the DBpedia knowledge base currently describes 6.0M entities of which 4.6M have abstracts, 1.53M have geo coordinates and 1.6M depictions. In total, 5.2M resources are classified in a consistent ontology, consisting of 1.5M persons, 810K places (including 505K populated places), 490K works (including 135K music albums, 106K films and 20K video games), 275K organizations (including 67K companies and 53K educational institutions), 301K species and 5K diseases. The total number of resources in English DBpedia is 16.9M that, besides the 6.0M resources, includes 1.7M skos concepts (categories), 7.3M redirect pages, 260K disambiguation pages and 1.7M intermediate nodes.

Altogether the DBpedia 2016-04 release consists of 9.5 billion (2015-10: 8.8 billion) pieces of information (RDF triples) out of which 1.3 billion (2015-10: 1.1 billion) were extracted from the English edition of Wikipedia, 5.0 billion (2015-10: 4.4 billion) were extracted from other language editions and 3.2 billion (2015-10: 3.2 billion) from DBpedia Commons and Wikidata. In general, we observed a growth in mapping-based statements of about 2%.

Thorough statistics can be found on the DBpedia website and general information on the DBpedia datasets here.

Community

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2016-04 ontology encompasses:

  • 754 classes (DBpedia 2015-10: 739)
  • 1,103 object properties (DBpedia 2015-10: 1,099)
  • 1,608 datatype properties (DBpedia 2015-10: 1,596)
  • 132 specialized datatype properties (DBpedia 2015-10: 132)
  • 410 owl:equivalentClass and 221 owl:equivalentProperty mappings to external vocabularies (DBpedia 2015-10: 407 and 221, respectively)

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2016-04 extraction, we used a total of 5800 template mappings (DBpedia 2015-10: 5553 mappings). For the second time the top language, gauged by the number of mappings, is Dutch (646 mappings), followed by the English community (604 mappings).

(Breaking) Changes

  • In addition to datasets normalized to English DBpedia (en-uris), we additionally provide normalized datasets based on the DBpedia Wikidata (DBw) datasets (wkd-uris). These sorted datasets will be the foundation for the upcoming fusion process with Wikidata. The DBw-based URIs will be the only ones provided from the following releases on.
  • We now filter out triples from the Raw Infobox Extractor that are already mapped. E.g. no more “<x> dbo:birthPlace <z>” and “<x> dbp:birthPlace|dbp:placeOfBirth|… <z>” in the same resource. These triples are now moved to the “infobox-properties-mapped” datasets and not loaded on the main endpoint. See issue 22 for more details.
  • Major improvements in our citation extraction. See here for more details.
  • We incorporated the statistical distribution approach of Heiko Paulheim in creating type statements automatically and providing them as additional datasets (instance_types_sdtyped_dbo).

In case you missed it, what we changed in the previous release (2015-10):

  • English DBpedia switched to IRIs. This can be a breaking change to some applications that need to change their stored DBpedia resource URIs / links. We provide the “uri-same-as-iri” dataset for English to ease the transition.
  • The instance-types dataset is now split into two files: instance-types (containing only direct types) and instance-types-transitive (containing the transitive types of a resource based on the DBpedia ontology).
  • The mappingbased-properties file is now split into three (3) files:
    • “geo-coordinates-mappingbased”, which contains the coordinates originating from the mappings wiki. The “geo-coordinates” dataset continues to provide the coordinates originating from the GeoExtractor
    • “mappingbased-literals”, which contains mapping-based facts with literal values
    • “mappingbased-objects”, which contains mapping-based facts with object values
    • the “mappingbased-objects-disjoint-[domain|range]” datasets, which contain facts that are filtered out of the “mappingbased-objects” datasets as errors but are still provided
  • We added a new extractor for citation data that provides two files:
    • citation links: linking resources to citations
    • citation data: additional data extracted from citations. This is quite an interesting dataset, but we need help cleaning it up
  • All datasets are available in .ttl and .tql serialization (the .nt and .nq datasets were dropped for reasons of redundancy and server capacity).
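For readers unfamiliar with the .tql format: it is a line-based quad serialisation in which each triple carries a fourth field recording its provenance context. A deliberately naive sketch of reading one such (hypothetical) line, assuming simple IRI and literal tokens without embedded spaces:

```python
# One hypothetical .tql line: subject, predicate, object, provenance context.
line = ('<http://dbpedia.org/resource/Leipzig> '
        '<http://www.w3.org/2000/01/rdf-schema#label> '
        '"Leipzig"@en '
        '<http://en.wikipedia.org/wiki/Leipzig> .')

# Naive tokenisation: good enough for this line, but not a full N-Quads
# parser (real literals may contain spaces and escape sequences).
subject, predicate, obj, context = line.rstrip(' .').split(' ')

print(context)  # the fourth field records where the triple came from
```

A .ttl file contains the same statements without the fourth field.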

Upcoming Changes

  • Dataset normalization: We are going to normalize datasets based on Wikidata URIs and no longer on the English language edition, as a prerequisite to finally starting the fusion process with Wikidata.
  • RML Integration: Wouter Maroy has already provided the necessary groundwork for switching the mappings wiki to an RML-based approach on GitHub. We are not there yet, but this is at the top of our list of changes.
  • Starting with the next release we are adding datasets with NIF annotations of the abstracts (as we already provided those for the 2015-04 release). We will eventually extend the NIF annotation dataset to cover the whole Wikipedia article of a resource.

New Datasets

  • SDTypes: We extended the coverage of the automatically created type statements (instance_types_sdtyped_dbo) to English, German and Dutch (see above).
  • Extensions: In the extension folder (2016-04/ext) we provide two new datasets (both are to be considered experimental):
    • DBpedia World Facts: This dataset is authored by the DBpedia Association itself. It lists all countries, all currencies in use and (most) languages spoken in the world, as well as how these concepts relate to each other (spoken in, primary language, etc.) and useful properties like ISO codes (ontology diagram). This dataset extends the very useful LEXVO dataset with facts from DBpedia and the CIA Factbook. Please report any errors or suggestions regarding this dataset to Markus.
    • Lector Facts: This experimental dataset was provided by Matteo Cannaviccio and demonstrates his approach to generating facts by using common sequences of words (i.e. phrases) that are frequently used to describe instances of binary relations in a text. We are looking into using this approach as a regular extraction step. It would be helpful to get some feedback from you.

Credits

Lots of thanks to

  • Markus Freudenberg (University of Leipzig / DBpedia Association) for taking over the whole release process and creating the revamped download & statistics pages.
  • Dimitris Kontokostas (University of Leipzig / DBpedia Association) for conveying his considerable knowledge of the extraction and release process.
  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
  • Heiko Paulheim (University of Mannheim) for providing the necessary code for his algorithm to generate additional type statements for formerly untyped resources and to identify and remove wrong statements, which is now part of the DIEF.
  • Václav Zeman, Thomas Klieger and the whole LHD team (University of Prague) for their contribution of additional DBpedia types
  • Marco Fossati (FBK) for contributing the DBTax types
  • Alan Meehan (TCD) for performing a big external link cleanup
  • Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to DBpedia ontology.
  • Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that provides 5-Star Linked Open Data publication and SPARQL Query Services.
  • OpenLink Software (http://www.openlinksw.com/) collectively for providing the SPARQL Query Services and Linked Open Data publishing infrastructure for DBpedia in addition to their continuous infrastructure support.
  • Ruben Verborgh from Ghent University – iMinds for publishing the dataset as Triple Pattern Fragments, and iMinds for sponsoring DBpedia’s Triple Pattern Fragments server.
  • Ali Ismayilov (University of Bonn) for extending the DBpedia Wikidata dataset.
  • Vladimir Alexiev (Ontotext) for leading a successful mapping and ontology clean up effort.
  • All the GSoC students and mentors who directly or indirectly influenced the DBpedia release
  • Special thanks to members of the DBpedia Association, the AKSW and the Department for Business Information Systems of the University of Leipzig.

The work on the DBpedia 2016-04 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering (http://aligned-project.eu/). More information about DBpedia is found at http://dbpedia.org as well as in the new overview article about the project available at http://wiki.dbpedia.org/Publications.

Have fun with the new DBpedia 2016-04 release!

For more information about DBpedia, please visit our website or follow us on facebook!
Your DBpedia Association

We proudly present our new 2015-10 DBpedia release, which is available now via http://dbpedia.org/sparql. Go and check it out!

This DBpedia release is based on updated Wikipedia dumps dating from October 2015 featuring a significantly expanded base of information as well as richer and cleaner data based on the DBpedia ontology.

So, what did we do?

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2015-10 ontology encompasses

  • 739 classes (DBpedia 2015-04: 735)
  • 1,099 properties with reference values (a.k.a. object properties) (DBpedia 2015-04: 1,098)
  • 1,596 properties with typed literal values (a.k.a. datatype properties) (DBpedia 2015-04: 1,583)
  • 132 specialized datatype properties (DBpedia 2015-04: 132)
  • 407 owl:equivalentClass and 222 owl:equivalentProperty mappings to external vocabularies (DBpedia 2015-04: 408 and 200, respectively)

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2015-10 extraction, we used a total of 5553 template mappings (DBpedia 2015-04: 4317 mappings). For the first time, the top language, gauged by number of mappings, is Dutch (606 mappings), surpassing the English community (600 mappings).

And what are the (breaking) changes?

  • English DBpedia switched from URIs to IRIs.
  • The instance-types dataset is now split into two files:
    • “instance-types” contains only direct types.
    • “instance-types-transitive” contains transitive types.
  • The “mappingbased-properties” file is now split into three (3) files:
    • “geo-coordinates-mappingbased”
    • “mappingbased-literals” contains mapping-based statements with literal values.
    • “mappingbased-objects”
  • We added a new extractor for citation data.
  • All datasets are available in .ttl and .tql serialization.
  • We are providing DBpedia as a Docker image.
  • From now on, we provide extensive dataset metadata by adding DataIDs for all extracted languages to the respective language directories.
  • In addition, we revamped the dataset table on the download page. It is created dynamically based on the DataID of all languages. Likewise, the tables on the statistics page are now based on files providing information about all mapping languages.
  • From now on, we also include the original Wikipedia dump files (‘pages_articles.xml.bz2’) alongside the extracted datasets.
  • A complete changelog can always be found in the git log.
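The switch to IRIs means that English DBpedia identifiers now keep non-ASCII characters instead of percent-encoding them. A small sketch of the correspondence between the two spellings of the same resource (the resource name is illustrative), using Python's standard library:

```python
from urllib.parse import quote, unquote

iri = "http://dbpedia.org/resource/Café"  # IRI form: non-ASCII kept as-is
uri = quote(iri, safe=":/")               # percent-encoded URI form

print(uri)  # http://dbpedia.org/resource/Caf%C3%A9
assert unquote(uri) == iri                # the two forms round-trip losslessly
```

Applications that stored the percent-encoded spelling need a one-time conversion like this when following links into the new release.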

And what about the numbers?

Altogether the new DBpedia 2015-10 release consists of 8.8 billion (2015-04: 6.9 billion) pieces of information (RDF triples) out of which 1.1 billion (2015-04: 737 million) were extracted from the English edition of Wikipedia, 4.4 billion (2015-04: 3.8 billion) were extracted from other language editions, and 3.2 billion (2015-04: 2.4 billion) came from DBpedia Commons and Wikidata. In general, we observed a significant growth in raw infobox and mapping-based statements of close to 10%. Thorough statistics are available via the Statistics page.

And what’s up next?

We will be working to move away from the mappings wiki but we will have at least one more mapping sprint. Moreover, we have some cool ideas for GSOC this year. Additional mentors are more than welcome. 🙂

And who is to blame for the new release?

We want to thank all editors that contributed to the DBpedia ontology mappings via the Mappings Wiki, all the GSoC students and mentors working directly or indirectly on the DBpedia release and the whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.

Special thanks go to Markus Freudenberg and Dimitris Kontokostas (University of Leipzig), Volha Bryl (University of Mannheim / Springer), Heiko Paulheim (University of Mannheim), Václav Zeman and the whole LHD team (University of Prague), Marco Fossati (FBK), Alan Meehan (TCD), Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy), Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software), OpenLink Software (http://www.openlinksw.com/), Ruben Verborgh from Ghent University – iMinds, Ali Ismayilov (University of Bonn), Vladimir Alexiev (Ontotext) and members of the DBpedia Association, the AKSW and the Department for Business Information Systems of the University of Leipzig for their commitment in putting tremendous time and effort into getting this done.

The work on the DBpedia 2015-10 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering (http://aligned-project.eu/).

 

Detailed information about the new release is available here. For more information about DBpedia, please visit our website or follow us on Facebook!

Have fun and all the best!

Yours

DBpedia Association

DBpedia Version 2015-04 released

Dear all,

we are happy to announce the release of DBpedia 2015-04 (also known as: 2015 A). The new release is based on updated Wikipedia dumps dating from February/March 2015 and features an enlarged DBpedia ontology with more infobox to ontology mappings, leading to richer and cleaner data.

http://wiki.dbpedia.org/Downloads2015-04

The English version of the DBpedia knowledge base currently describes 5.9M things, out of which 4.3M resources have abstracts, 452K have geo coordinates and 1.45M have depictions. In total, 4 million resources are classified in a consistent ontology, including 2.06M persons, 682K places (including 455K populated places), 376K creative works (including 92K music albums, 90K films and 17K video games), 188K organizations (including 51K companies and 33K educational institutions), 278K species and 5K diseases. The total number of resources in English DBpedia is 15.3M which, besides the 5.9M resources, includes 1.2M skos concepts (categories), 6.83M redirect pages, 256K disambiguation pages and 1.13M intermediate nodes.

We provide localized versions of DBpedia in 128 languages. All these versions together describe 38.3 million things, out of which 23.8 million are localized descriptions of things that also exist in the English version of DBpedia. The full DBpedia data set features 38 million labels and abstracts in 128 different languages, 25.2 million links to images and 29.8 million links to external web pages; 80.9 million links to Wikipedia categories, and 41.2 million links to YAGO categories. DBpedia is connected with other Linked Datasets by around 50 million RDF links.

In addition we provide DBpedia datasets for Wikimedia Commons and Wikidata.

Altogether the DBpedia 2015-04 release consists of 6.9 billion pieces of information (RDF triples) out of which 737 million were extracted from the English edition of Wikipedia, 3.76 billion were extracted from other language editions and 2.4 billion from DBpedia Commons and Wikidata.

Thorough statistics can be found on the DBpedia website and general information on the DBpedia datasets here.

From this release on, we will try to provide two releases per year, one in April and the next in October. The 2015-04 release was delayed by 3 months, but we will try to keep the schedule and release 2015-10 at the end of October or early November.

Among our plans for the next release is to remove the URI encoding of English DBpedia (dbpedia.org) and switch to IRIs only. This will simplify the release process and align it with all other DBpedia language datasets. We know that this will probably break some links to DBpedia, but we feel it is the only way to move forward. If you have any reasons against this action, please let us know now.

A complete list of changes in this release can be found on GitHub.

From this release on, we adjusted the download page folder structure, giving us more flexibility to offer more datasets in the near future:

http://downloads.dbpedia.org/2015-04/

Enlarged Ontology

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2015 ontology encompasses

  • 735 classes (DBpedia 2014: 685)
  • 1,098 object properties (DBpedia 2014: 1079)
  • 1,583 datatype properties (DBpedia 2014: 1,600)
  • 132 specialized datatype properties (DBpedia 2014: 116)
  • 408 owl:equivalentClass and 200 owl:equivalentProperty mappings to external vocabularies

Additional Infobox to Ontology Mappings

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. There are six new languages with mappings: Arabic, Bulgarian, Armenian, Romanian, Swedish and Ukrainian.

For the DBpedia 2015 extraction, we used a total of 4317 template mappings (DBpedia 2014: 3814 mappings).

Extended Type System to cover Articles without Infobox

Until the DBpedia 3.8 release, a concept was only assigned a type (like person or place) if the corresponding Wikipedia article contained an infobox indicating this type. Starting from the 3.9 release, we provide type statements for articles without an infobox, inferred from the link structure within the DBpedia knowledge base using the algorithm described in Paulheim/Bizer 2014. For the new release, an improved version of the algorithm was run to produce type information for 400,000 things that were formerly untyped. A similar algorithm (presented in the same paper) was used to identify and remove potentially wrong statements from the knowledge base.
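The intuition behind the link-based typing can be sketched as follows: a type is proposed for an untyped resource when it is sufficiently frequent among the typed resources that link to it. This is a drastic simplification of the published algorithm (which additionally weights each linking property by how predictive it is), shown with hypothetical toy data:

```python
from collections import Counter

# Types of the typed resources linking to one hypothetical untyped article.
neighbour_types = [
    ["Person", "Agent"],
    ["Person", "Agent"],
    ["Person", "Agent"],
    ["Place"],
]

# Score each candidate type by the fraction of linking resources carrying it.
counts = Counter(t for types in neighbour_types for t in types)
scores = {t: n / len(neighbour_types) for t, n in counts.items()}

# Keep only types seen in at least half of the linking resources.
inferred = sorted(t for t, s in scores.items() if s >= 0.5)
print(inferred)  # ['Agent', 'Person']
```

The same scores, read in reverse, flag existing type statements that disagree with a resource's link neighbourhood as candidates for removal.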

In addition, this release includes four new type datasets, although they are not included in the online SPARQL endpoint: 1) LHD datasets for English, German and Dutch and 2) DBTax for English.

Both of these datasets use a typing system beyond the DBpedia ontology; we provide a subset mapped to the DBpedia ontology (dbo) as well as a full one with all types (ext).

New and updated RDF Links into External Data Sources

We updated the following RDF link sets pointing at other Linked Data sources: Freebase, Wikidata, Geonames and GADM.

Accessing the DBpedia 2015-04 Release

You can download the new DBpedia datasets in RDF format from http://wiki.dbpedia.org/Downloads or

http://downloads.dbpedia.org/2015-04/

 

Additional external dataset contributions

Starting with this release, we will provide additional datasets related to DBpedia. For 2015-04 we provide a PageRank dataset for English and German, contributed by HPI.

http://downloads.dbpedia.org/2015-04/ext/

 

As usual, the new dataset is also published in 5-Star Linked Open Data form and accessible via the SPARQL Query Service endpoint at http://dbpedia.org/sparql and Triple Pattern Fragments service at http://fragments.dbpedia.org/.
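As a usage sketch, a SPARQL query can be sent to the endpoint as a plain HTTP GET request. The query and resource below are illustrative; the `format` parameter is a Virtuoso convention for selecting the result serialisation.

```python
from urllib.parse import urlencode

# Ask for the English abstract of one resource (illustrative query).
query = """PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Leipzig> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}"""

# Build the GET request URL for the public endpoint.
url = "http://dbpedia.org/sparql?" + urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})

# When online, fetch with urllib.request.urlopen(url); results arrive as
# SPARQL 1.1 JSON bindings.
print(url)
```

The Triple Pattern Fragments service at http://fragments.dbpedia.org/ offers a lighter-weight alternative for clients that evaluate queries themselves.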

Credits

Lots of thanks to

  • Markus Freudenberg (University of Leipzig) for taking over the whole release process
  • Dimitris Kontokostas for conveying his considerable knowledge of the extraction and release process.
  • Volha Bryl and Daniel Fleischhacker (University of Mannheim) for their work on the previous release and their continuous support in this release.
  • Alexandru Todor (University of Berlin) for contributing time and computing resources for the abstract extraction.
  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
  • Heiko Paulheim (University of Mannheim) for re-running his algorithm to generate additional type statements for formerly untyped resources and to identify and remove wrong statements.
  • Václav Zeman and the whole LHD team (University of Prague) for their contribution of additional DBpedia types
  • Marco Fossati (FBK) for contributing the DBTax types
  • Petar Ristoski (University of Mannheim) for generating the updated links pointing at the GADM database of Global Administrative Areas. Petar will also generate an updated release of DBpedia as Tables soon.
  • Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to DBpedia ontology.
  • Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that provides 5-Star Linked Open Data publication and SPARQL Query Services.
  • OpenLink Software (http://www.openlinksw.com/) altogether for providing the SPARQL Query Services and Linked Open Data publishing infrastructure for DBpedia in addition to their continuous infrastructure support.
  • Ruben Verborgh from Ghent University – iMinds for publishing the dataset as Triple Pattern Fragments, and iMinds for sponsoring DBpedia’s Triple Pattern Fragments server.
  • Magnus Knuth (HPI) for providing a pagerank dataset for English and German
  • Ali Ismayilov (University of Bonn) for implementing DBpedia Wikidata dataset.
  • Vladimir Alexiev (Ontotext) for leading a successful mapping and ontology clean up effort.
  • Nono314 and other community members for contributing many improvements and bug fixes to the extraction framework.
  • All the GSoC students and mentors working directly or indirectly on the DBpedia release

The work on the DBpedia 2016-10 release was financially supported by the European Commission through the project ALIGNED – quality-centric, software and data engineering (http://aligned-project.eu/).

More information about DBpedia is found at http://dbpedia.org as well as in the new overview article about the project available at http://wiki.dbpedia.org/Publications.

Have fun with the new DBpedia 2016-10 release!

Cheers,

Markus Freudenberg, Dimitris Kontokostas, Sebastian Hellmann

DBpedia Version 2014 released

Hi all,

we are happy to announce the release of DBpedia 2014.

The most important improvements of the new release compared to DBpedia 3.9 are:

1. the new release is based on updated Wikipedia dumps dating from April / May 2014 (the 3.9 release was based on dumps from March / April 2013), leading to an overall increase of the number of things described in the English edition from 4.26 to 4.58 million.

2. the DBpedia ontology is enlarged and the number of infobox to ontology mappings has risen, leading to richer and cleaner data.

The English version of the DBpedia knowledge base currently describes 4.58 million things, out of which 4.22 million are classified in a consistent ontology (http://wiki.dbpedia.org/Ontology2014), including 1,445,000 persons, 735,000 places (including 478,000 populated places), 411,000 creative works (including 123,000 music albums, 87,000 films and 19,000 video games), 241,000 organizations (including 58,000 companies and 49,000 educational institutions), 251,000 species and 6,000 diseases.

We provide localized versions of DBpedia in 125 languages. All these versions together describe 38.3 million things, out of which 23.8 million are localized descriptions of things that also exist in the English version of DBpedia. The full DBpedia data set features 38 million labels and abstracts in 125 different languages, 25.2 million links to images and 29.8 million links to external web pages; 80.9 million links to Wikipedia categories, and 41.2 million links to YAGO categories. DBpedia is connected with other Linked Datasets by around 50 million RDF links.

Altogether the DBpedia 2014 release consists of 3 billion pieces of information (RDF triples) out of which 580 million were extracted from the English edition of Wikipedia, 2.46 billion were extracted from other language editions.

Detailed statistics about the DBpedia data sets in 28 popular languages are provided at Dataset Statistics page (http://wiki.dbpedia.org/Datasets2014/DatasetStatistics).

The main changes between DBpedia 3.9 and 2014 are described below. For additional, more detailed information please refer to the DBpedia Change Log (http://wiki.dbpedia.org/Changelog).

 1. Enlarged Ontology

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2014 ontology encompasses

  • 685  classes (DBpedia 3.9: 529)
  • 1,079 object properties (DBpedia 3.9: 927)
  • 1,600 datatype properties (DBpedia 3.9: 1,290)
  • 116 specialized datatype properties (DBpedia 3.9: 116)
  • 47 owl:equivalentClass and 35 owl:equivalentProperty mappings to http://schema.org

2. Additional Infobox to Ontology Mappings

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2014 extraction, we used 4,339 mappings (DBpedia 3.9: 3,177 mappings), which are distributed as follows over the languages covered in the release.

  • English: 586 mappings
  • Dutch: 469 mappings
  • Serbian: 450 mappings
  • Polish: 383 mappings
  • German: 295 mappings
  • Greek: 281 mappings
  • French: 221 mappings
  • Portuguese: 211 mappings
  • Slovenian: 170 mappings
  • Korean: 148 mappings
  • Spanish: 137 mappings
  • Italian: 125 mappings
  • Belarusian: 125 mappings
  • Hungarian: 111 mappings
  • Turkish: 91 mappings
  • Japanese: 81 mappings
  • Czech: 66 mappings
  • Bulgarian: 61 mappings
  • Indonesian: 59 mappings
  • Catalan: 52 mappings
  • Arabic: 52 mappings
  • Russian: 48 mappings
  • Basque: 37 mappings
  • Croatian: 36 mappings
  • Irish: 17 mappings
  • Wiki-Commons: 12 mappings
  • Welsh: 7 mappings
  • Bengali: 6 mappings
  • Slovak: 2 mappings

3. Extended Type System to cover Articles without Infobox

Until the DBpedia 3.8 release, a concept was only assigned a type (like person or place) if the corresponding Wikipedia article contained an infobox indicating this type. Starting from the 3.9 release, we provide type statements for articles without an infobox; these are inferred from the link structure within the DBpedia knowledge base using the algorithm described in Paulheim/Bizer 2014 (http://www.heikopaulheim.com/documents/ijswis_2014.pdf). For the new release, an improved version of the algorithm was run to produce type information for 400,000 things that were formerly not typed. A similar algorithm (presented in the same paper) was used to identify and remove potentially wrong statements from the knowledge base.

 4. New and updated RDF Links into External Data Sources

 We updated the following RDF link sets pointing at other Linked Data sources: Freebase, Wikidata, Geonames and GADM. For an overview about all data sets that are interlinked from DBpedia please refer to http://wiki.dbpedia.org/Interlinking.

Accessing the DBpedia 2014 Release 

You can download the new DBpedia datasets in RDF format from http://wiki.dbpedia.org/Downloads. In addition, we provide some of the core DBpedia data also in tabular form (CSV and JSON formats) at http://wiki.dbpedia.org/DBpediaAsTables.

 As usual, the new dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql.

Credits

 Lots of thanks to

  1. Daniel Fleischhacker (University of Mannheim) and Volha Bryl (University of Mannheim) for improving the DBpedia extraction framework, for extracting the DBpedia 2014 data sets for all 125 languages, for generating the updated RDF links to external data sets, and for generating the statistics about the new release.
  2. All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  3.  The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
  4. Dimitris Kontokostas (University of Leipzig) for improving the DBpedia extraction framework and loading the new release onto the DBpedia download server in Leipzig.
  5. Heiko Paulheim (University of Mannheim) for re-running his algorithm to generate additional type statements for formerly untyped resources and to identify and remove wrong statements.
  6. Petar Ristoski (University of Mannheim) for generating the updated links pointing at the GADM database of Global Administrative Areas. Petar will also generate an updated release of DBpedia as Tables soon.
  7. Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to DBpedia ontology.
  8.  Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.
  9.  OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia.
  10. Michael Moore (University of Waterloo, as an intern at the University of Mannheim) for implementing the anchor text extractor and for contributing to the statistics scripts.
  11. Ali Ismayilov (University of Bonn) for implementing Wikidata extraction, on which the interlanguage link generation was based.
  12. Gaurav Vaidya (University of Colorado Boulder) for implementing and running Wikimedia Commons extraction.
  13. Andrea Di Menna, Jona Christopher Sahnwaldt, Julien Cojan, Julien Plu, Nilesh Chakraborty and others who contributed improvements to the DBpedia extraction framework via the source code repository on GitHub.
  14.  All GSoC mentors and students for working directly or indirectly on this release: https://github.com/dbpedia/extraction-framework/graphs/contributors

 The work on the DBpedia 2014 release was financially supported by the European Commission through the project LOD2 – Creating Knowledge out of Linked Data (http://lod2.eu/).

More information about DBpedia is found at http://dbpedia.org/About as well as in the new overview article about the project available at  http://wiki.dbpedia.org/Publications.

Have fun with the new DBpedia 2014 release!

Cheers,

Daniel Fleischhacker, Volha Bryl, and Christian Bizer

 

 

Making sense out of the Wikipedia categories (GSoC2013)

(Part of our DBpedia+spotlight @ GSoC mini blog series)

Mentor: Marco Fossati @hjfocs <fossati[at]spaziodati.eu>
Student: Kasun Perera <kkasunperera[at]gmail.com>

The latest version of the DBpedia ontology has 529 classes. It is not well balanced and shows a lack of coverage in terms of encyclopedic knowledge representation.

Furthermore, the current typing approach involves a costly manual mapping effort and heavily depends on the presence of infoboxes in Wikipedia articles.

Hence, a large number of DBpedia instances are either untyped, due to a missing mapping or a missing infobox, or have an overly generic or overly specialized type, due to the nature of the ontology.

The goal of this project is to identify a set of meaningful Wikipedia categories that can be used to extend the type coverage of DBpedia instances.

How we used the Wikipedia category system

Wikipedia categories are organized in some kind of really messy hierarchy, which is of little use from an ontological point of view.

We investigated how to process this chaotic world.

Here’s what we have done

We have identified a set of meaningful categories by combining the following approaches:

  1. Algorithmic, programmatically traversing the whole Wikipedia category system.

Wow! This was really the hardest part. Kasun did a great job! Special thanks to the category guru Christian Consonni for shedding light in the darkness of such a weird world.

  2. Linguistic, identifying conceptual categories with NLP techniques.

We got inspired by the YAGO guys.

  3. Multilingual, leveraging interlanguage links.

Kudos to Aleksander Pohl for the idea.

  4. Post-mortem, cleaning out stuff that was still not relevant.

No resurrection without Freebase!

Outcomes

We identified a total of 3,751 candidate categories that can be used to type the instances.

We produced a dataset in the following format:

<Wikipedia_article_page> rdf:type <article_category>
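As a hedged illustration, lines in this shape can be parsed with a few lines of Python; the regular expression and the sample triple below are our own sketch, not the exact serialization used in the dump:

```python
import re

# Sketch: extract (article, category) pairs from lines of the form
#   <Wikipedia_article_page> rdf:type <article_category>
# The pattern below is illustrative, not the dump's exact grammar.
TRIPLE = re.compile(r"<([^>]+)>\s+rdf:type\s+<([^>]+)>")

def parse_type_triples(lines):
    """Yield (article, category) pairs from dataset lines."""
    for line in lines:
        m = TRIPLE.search(line)
        if m:
            yield m.group(1), m.group(2)

# Illustrative sample line, not taken from the real dump.
sample = [
    "<http://en.wikipedia.org/wiki/Endach> rdf:type "
    "<http://dbpedia.org/resource/Category:Villages_in_Austria>",
]
pairs = list(parse_type_triples(sample))
```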

You can access the full dump here. This has not been validated by humans yet.

If you feel like having a look at it, please tell us what you think.

Take a look at Kasun’s progress page for more details.

DBpedia as Tables released

As some of the potential users of DBpedia might not be familiar with the RDF data model and the SPARQL query language, we provide some of the core DBpedia 3.9 data also in tabular form as Comma-Separated Values (CSV) files, which can easily be processed using standard tools such as spreadsheet applications, relational databases or data mining tools.

For each class in the DBpedia ontology (such as Person, Radio Station, Ice Hockey Player, or Band) we provide a single CSV file which contains all instances of this class. Each instance is described by its URI, an English label and a short abstract, the mapping-based infobox data describing the instance (extracted from the English edition of Wikipedia), and geo-coordinates (if applicable).
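Such per-class files can be read with any standard CSV library. In this Python sketch the column names ("URI", "label", "abstract") and the sample row are illustrative assumptions, not the exact headers of the released files:

```python
import csv
import io

# Hedged sketch: read a per-class CSV file. A real run would open the
# downloaded file; here an in-memory sample stands in for it, and the
# column names are assumptions for illustration.
sample_csv = io.StringIO(
    '"URI","label","abstract"\n'
    '"http://dbpedia.org/resource/Berlin","Berlin",'
    '"Berlin is the capital of Germany."\n'
)

rows = list(csv.DictReader(sample_csv))
labels = [row["label"] for row in rows]
```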

Altogether we provide 530 CSV files in the form of a single ZIP file (3 GB compressed, 73.4 GB uncompressed).

More information about the file format as well as the download link is found at DBpedia as Tables.

DBpedia 3.9 released, including wider infobox coverage, additional type statements, and new YAGO and Wikidata links

Hi all,

we are happy to announce the release of DBpedia 3.9.

The most important improvements of the new release compared to DBpedia 3.8 are:

1. the new release is based on updated Wikipedia dumps dating from March / April 2013 (the 3.8 release was based on dumps from June 2012), leading to an overall increase in the number of concepts in the English edition from 3.7 to 4.0 million things.

2. the DBpedia ontology is enlarged and the number of infobox to ontology mappings has risen, leading to richer and cleaner concept descriptions.

3. we extended the DBpedia type system to also cover Wikipedia articles that do not contain an infobox.

4. we provide links pointing from DBpedia concepts to Wikidata concepts and updated the links pointing at YAGO concepts and classes, making it easier to integrate knowledge from these sources.

The English version of the DBpedia knowledge base currently describes 4.0 million things, out of which 3.22 million are classified in a consistent Ontology, including 832,000 persons, 639,000 places (including 427,000 populated places), 372,000 creative works (including 116,000 music albums, 78,000 films and 18,500 video games), 209,000 organizations (including 49,000 companies and 45,000 educational institutions), 226,000 species and 5,600 diseases.

We provide localized versions of DBpedia in 119 languages. All these versions together describe 24.9 million things, out of which 16.8 million overlap (are interlinked) with the concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 12.6 million unique things in 119 different languages; 24.6 million links to images and 27.6 million links to external web pages; 45.0 million external links into other RDF datasets, 67.0 million links to Wikipedia categories, and 41.2 million YAGO categories.

Altogether the DBpedia 3.9 release consists of 2.46 billion pieces of information (RDF triples) out of which 470 million were extracted from the English edition of Wikipedia, 1.98 billion were extracted from other language editions, and about 45 million are links to external data sets.

Detailed statistics about the DBpedia data sets in 24 popular languages are provided at Dataset Statistics.

The main changes between DBpedia 3.8 and 3.9 are described below. For additional, more detailed information please refer to the Change Log.

1. Enlarged Ontology

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 3.9 ontology encompasses

  • 529 classes (DBpedia 3.8: 359)
  • 927 object properties (DBpedia 3.8: 800)
  • 1290 datatype properties (DBpedia 3.8: 859)
  • 116 specialized datatype properties (DBpedia 3.8: 116)
  • 46 owl:equivalentClass and 31 owl:equivalentProperty mappings to http://schema.org

2. Additional Infobox to Ontology Mappings

The editors of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 3.9 extraction, we used 3177 mappings (DBpedia 3.8: 2347 mappings), which are distributed as follows over the languages covered in the release.

3. Extended Type System to cover Articles without Infobox

Until the DBpedia 3.8 release, a concept was only assigned a type (like person or place) if the corresponding Wikipedia article contained an infobox indicating this type. The new 3.9 release also contains type statements for articles without an infobox, inferred from the link structure within the DBpedia knowledge base using the algorithm described in Paulheim/Bizer 2013. Applying the algorithm allowed us to provide type information for 440,000 concepts that were formerly not typed. A similar algorithm was also used to identify and remove potentially wrong links from the knowledge base.

4. New and updated RDF Links into External Data Sources

We added RDF links to Wikidata and updated the following RDF link sets pointing at other Linked Data sources: YAGO, Freebase, Geonames, GADM and EUNIS. For an overview about all data sets that are interlinked from DBpedia please refer to DBpedia Interlinking.

5. New Find Related Concepts Service

We offer a new service for finding resources that are related to a given DBpedia seed resource. More information about the service is found at DBpedia FindRelated.

Accessing the DBpedia 3.9 Release

You can download the new DBpedia datasets from http://wiki.dbpedia.org/Downloads39.

As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql

Credits

Lots of thanks to

  • Jona Christopher Sahnwaldt (Freelancer funded by the University of Mannheim, Germany) for improving the DBpedia extraction framework, for extracting the DBpedia 3.9 data sets for all 119 languages, and for generating the updated RDF links to external data sets.
  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • Heiko Paulheim (University of Mannheim, Germany) for inventing and implementing the algorithm to generate additional type statements for formerly untyped resources.
  • The whole Internationalization Committee for pushing the DBpedia internationalization forward.
  • Dimitris Kontokostas (University of Leipzig) for improving the DBpedia extraction framework and loading the new release onto the DBpedia download server in Leipzig.
  • Volha Bryl (University of Mannheim, Germany) for generating the statistics about the new release.
  • Petar Ristoski (University of Mannheim, Germany) for generating the updated links pointing at the GADM database of Global Administrative Areas.
  • Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.
  • OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia.
  • Julien Cojan, Andrea Di Menna, Ahmed Ktob, Julien Plu, Jim Regan and others who contributed improvements to the DBpedia extraction framework via the source code repository on GitHub.

The work on the DBpedia 3.9 release was financially supported by the European Commission through the project LOD2 – Creating Knowledge out of Linked Data (http://lod2.eu/).

More information about DBpedia is found at http://dbpedia.org/About as well as in the new overview article about the project.

Have fun with the new DBpedia release!

Cheers,

Chris Bizer and Christopher Sahnwaldt

DBpedia 3.8 released, including enlarged Ontology and additional localized Versions

Hi all,

we are happy to announce the release of DBpedia 3.8.


The most important improvements of the new release compared to DBpedia 3.7 are:

1. the new release is based on updated Wikipedia dumps dating from late May / early June 2012.
2. the DBpedia ontology is enlarged and the number of infobox to ontology mappings has risen.
3. the DBpedia internationalization has progressed and we now provide localized versions of DBpedia in even more languages.

The English version of the DBpedia knowledge base currently describes 3.77 million things, out of which 2.35 million are classified in a consistent ontology, including 764,000 persons, 573,000 places (including 387,000 populated places), 333,000 creative works (including 112,000 music albums, 72,000 films and 18,000 video games), 192,000 organizations (including 45,000 companies and 42,000 educational institutions), 202,000 species and 5,500 diseases.

We provide localized versions of DBpedia in 111 languages. All these versions together describe 20.8 million things, out of which 10.5 million overlap (are interlinked) with concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 10.3 million unique things in 111 different languages; 8.0 million links to images and 24.4 million HTML links to external web pages; 27.2 million data links into external RDF data sets, 55.8 million links to Wikipedia categories, and 8.2 million YAGO categories. The dataset consists of 1.89 billion pieces of information (RDF triples) out of which 400 million were extracted from the English edition of Wikipedia, 1.46 billion were extracted from other language editions, and about 27 million are data links into external RDF data sets.

The main changes between DBpedia 3.7 and 3.8 are described below. For additional, more detailed information please refer to the change log.

1. Enlarged Ontology

The DBpedia community added many new classes and properties on the mappings wiki. The DBpedia 3.8 ontology encompasses

  • 359 classes (DBpedia 3.7: 319)
  • 800 object properties (DBpedia 3.7: 750)
  • 859 datatype properties (DBpedia 3.7: 791)
  • 116 specialized datatype properties (DBpedia 3.7: 102)
  • 45 owl:equivalentClass and 31 owl:equivalentProperty mappings to http://schema.org


2. Additional Infobox to Ontology Mappings

The editors of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 3.8 extraction, we used 2347 mappings, among them

  • Polish: 382 mappings
  • English: 345 mappings
  • German: 211 mappings
  • Portuguese: 207 mappings
  • Greek: 180 mappings
  • Slovenian: 170 mappings
  • Korean: 146 mappings
  • Hungarian: 111 mappings
  • Spanish: 107 mappings
  • Turkish: 91 mappings
  • Czech: 66 mappings
  • Bulgarian: 61 mappings
  • Catalan: 52 mappings
  • Arabic: 51 mappings


3. New local DBpedia Chapters


We are also happy to see the number of local DBpedia chapters in different countries rising. Since the 3.7 DBpedia release we welcomed the French, Italian and Japanese chapters. In addition, we expect the Dutch DBpedia chapter to go online during the next months (in cooperation with http://bibliotheek.nl/). The DBpedia chapters provide local SPARQL endpoints and dereferenceable URIs for the DBpedia data in their corresponding language. The DBpedia Internationalization page provides an overview of the current state of the DBpedia Internationalization effort.

4. New and updated RDF Links into External Data Sources

We have added new RDF links pointing at resources in the following Linked Data sources: Amsterdam Museum, BBC Wildlife Finder, CORDIS, DBTune, Eurostat (Linked Statistics), GADM, LinkedGeoData, OpenEI (Open Energy Info). In addition, we have updated many of the existing RDF links pointing at other Linked Data sources.


5. New Wiktionary2RDF Extractor

We developed a DBpedia extractor that is configurable for any Wiktionary edition. It generates a comprehensive ontology about languages for use as a semantic lexical resource in linguistics. The data currently includes language, part of speech, senses with definitions, synonyms, taxonomies (hyponyms, hyperonyms, synonyms, antonyms) and translations for each lexical word. It is furthermore hosted as Linked Data and can serve as a central linking hub for LOD in linguistics. Currently available languages are English, German, French and Russian. In the next weeks we plan to add Vietnamese and Arabic. The goal is to allow the addition of languages just by configuration, without the need for programming skills, enabling collaboration as in the Mappings Wiki. For more information visit http://wiktionary.dbpedia.org/

6. Improvements to the Data Extraction Framework

  • In addition to N-Triples and N-Quads, the framework was extended to write triple files in Turtle format
  • Extraction steps that looked for links between different Wikipedia editions were replaced by more powerful post-processing scripts
  • Preparation time and effort for abstract extraction is minimized, extraction time is reduced to a few milliseconds per page
  • To save file system space, the framework can compress DBpedia triple files while writing and decompress Wikipedia XML dump files while reading
  • Using some bit twiddling, we can now load all ~200 million inter-language links into a few GB of RAM and analyze them
  • Users can download the ontology and mappings from the mappings wiki and store them in files, to avoid re-downloading them for each extraction, which takes a lot of time and makes extraction results less reproducible
  • We now use IRIs for all languages except English, which uses URIs for backwards compatibility
  • We now resolve redirects in all datasets where the object URIs are DBpedia resources
  • We check that extracted dates are valid (e.g. February never has 30 days) and that their format conforms to the corresponding XML Schema type, e.g. xsd:gYearMonth
  • We improved the removal of HTML character references from the abstracts
  • When extracting raw infobox properties, we make sure that the predicate URI can be used in RDF/XML, by appending an underscore if necessary
  • Page IDs and Revision IDs datasets now use the DBpedia resource as subject URI, not the Wikipedia page URL
  • We use foaf:isPrimaryTopicOf instead of foaf:page for the link from DBpedia resource to Wikipedia page
  • New inter-language link datasets for all languages
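The date-validity check listed above can be sketched in a few lines of Python; the helper name is ours, not part of the extraction framework, and the standard datetime module does the actual calendar validation (including leap years):

```python
from datetime import date

# Hedged sketch of the validity check described above: reject calendar
# dates that cannot exist, e.g. February never has 30 days. The function
# name is illustrative, not taken from the DBpedia extraction framework.
def is_valid_date(year, month, day):
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# 2012-02-30 cannot exist; 2012-02-29 can, since 2012 is a leap year.
checks = [is_valid_date(2012, 2, 30), is_valid_date(2012, 2, 29)]
```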


Accessing the DBpedia 3.8  Release

You can download the new DBpedia dataset from http://dbpedia.org/Downloads38.

As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql

Credits

Lots of thanks to

  • Jona Christopher Sahnwaldt (Freie Universität Berlin, Germany) for improving the DBpedia extraction framework and for extracting the DBpedia 3.8 data sets.
  • Dimitris Kontokostas (Aristotle University of Thessaloniki, Greece) for implementing the language generalizations to the extraction framework.
  • Uli Zellbeck and Anja Jentzsch (Freie Universität Berlin, Germany) for generating the new and updated RDF links to external datasets using the Silk interlinking framework.
  • Jonas Brekle (Universität Leipzig, Germany) and Sebastian Hellmann (Universität Leipzig, Germany) for their work on the new Wiktionary2RDF extractor.
  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • The whole Internationalization Committee for pushing the DBpedia internationalization forward.
  • Kingsley Idehen and Patrick van Kleef (both OpenLink Software) for loading the dataset into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint. OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia.

The work on the DBpedia 3.8 release was financially supported by the European Commission through the projects LOD2 – Creating Knowledge out of Linked Data (http://lod2.eu/, improvements to the extraction framework) and LATC – LOD Around the Clock (http://latc-project.eu/, creation of external RDF links).

More information about DBpedia is found at http://dbpedia.org/About

Have fun with the new DBpedia release!

Cheers,

Chris Bizer

DBpedia 3.7 released, including 15 localized Editions

Hi all,

we are happy to announce the release of DBpedia 3.7. The new release is based on Wikipedia dumps dating from late July 2011.

The new DBpedia data set describes more than 3.64 million things, of which 1.83 million are classified in a consistent ontology, including 416,000 persons, 526,000 places, 106,000 music albums, 60,000 films, 17,500 video games, 169,000 organizations, 183,000 species and 5,400 diseases.

The DBpedia data set features labels and abstracts for 3.64 million things in up to 97 different languages; 2,724,000 links to images and 6,300,000 links to external web pages; 6,200,000 external links into other RDF datasets, and 740,000 Wikipedia categories. The dataset consists of 1 billion pieces of information (RDF triples) out of which 385 million were extracted from the English edition of Wikipedia and roughly 665 million were extracted from other language editions and links to external datasets.

Localized Editions

Until now, we extracted data from non-English Wikipedia pages only if an equivalent English page existed, as we wanted a single URI to identify a resource across all 97 languages. However, since many pages in the non-English Wikipedia editions have no equivalent English page (especially small towns in different countries, e.g. the Austrian village Endach, or legal and administrative terms that are only relevant for a single country), relying on English URIs alone had the negative effect that DBpedia did not contain data for these entities, and many DBpedia users have complained about this shortcoming.

As part of the DBpedia 3.7 release, we now provide 15 localized DBpedia editions for download that contain data from all Wikipedia pages in a specific language. These localized editions cover the following languages: ca, de, el, es, fr, ga, hr, hu, it, nl, pl, pt, ru, sl, tr. The URIs identifying entities in these i18n data sets are constructed directly from the non-English title and a language-specific URI namespace (e.g. http://ru.dbpedia.org/resource/Berlin), so there are now 16 different URIs in DBpedia that refer to Berlin. We also extract the inter-language links from the different Wikipedia editions. Thus, whenever an inter-language link between a non-English Wikipedia page and its English equivalent exists, the resulting owl:sameAs link can be used to relate the localized DBpedia URI to its equivalent in the main (English) DBpedia edition. The localized DBpedia editions are provided for download on the DBpedia download page (http://wiki.dbpedia.org/Downloads37). Note that we do not provide public SPARQL endpoints for the localized editions, nor do the localized URIs dereference. This might change in the future, as more local DBpedia chapters are set up in different countries as part of the DBpedia internationalization effort (http://dbpedia.org/Internationalization).
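The URI construction described above can be sketched as follows. This is a simplified Python illustration: real DBpedia URI encoding applies more rules than the plain percent-encoding used here, and the helper names are ours.

```python
from urllib.parse import quote

# Simplified sketch: build a localized DBpedia URI from a language code
# and a page title, and emit the owl:sameAs link back to the main
# (English) edition when an inter-language link exists.
def localized_uri(lang, title):
    """e.g. ('ru', 'Berlin') -> http://ru.dbpedia.org/resource/Berlin"""
    return "http://%s.dbpedia.org/resource/%s" % (
        lang, quote(title.replace(" ", "_")))

def same_as_triple(lang, title, english_title):
    """owl:sameAs triple linking a localized URI to the English one."""
    return "<%s> owl:sameAs <http://dbpedia.org/resource/%s> ." % (
        localized_uri(lang, title), quote(english_title.replace(" ", "_")))

uri = localized_uri("ru", "Berlin")
triple = same_as_triple("ru", "Berlin", "Berlin")
```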

Other Changes

Beside the new localized editions, the DBpedia 3.7 release provides the following improvements and changes compared to the last release:

1. Framework

  • Redirects are resolved in a post-processing step, increasing inter-connectivity by 13% (applied to the English data sets)
  • Extractor configuration using the dependency injection principle
  • Simple threaded loading of mappings in server
  • Improved international language parsing support thanks to the members of the Internationalization Committee: http://dbpedia.org/Internationalization

2. Bugfixes

  • Encode homepage URLs to conform with N-Triples spec
  • Correct reference parsing
  • Recognize MediaWiki parser functions
  • Raw infobox extraction produces more object properties again
  • skos:related for category links starting with “:” and having an anchor text
  • Restrict objects to Main namespace in MappingExtractor
  • Double rounding (e.g. a person’s height should not be 1800.00000001 cm)
  • Start position in abstract extractor
  • Server can handle template names containing a slash
  • Encoding issues in YAGO dumps
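The double-rounding bugfix above concerns a common floating-point artifact in unit conversion. As an illustration only (the framework's actual fix may differ), converting a value naively can produce results like 1800.00000001; rounding once, after the conversion, avoids it:

```python
# Illustrative sketch of the floating-point artifact behind the rounding fix.
def metres_to_cm(value):
    """Convert metres to centimetres, rounding once after the conversion."""
    return round(value * 100.0, 2)
```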

3. Ontology

  • 320 ontology classes
  • 750 object properties
  • 893 datatype properties
  • owl:equivalentClass and owl:equivalentProperty mappings to http://schema.org

Note that the ontology is now a directed acyclic graph. Classes can have multiple superclasses, which was important for the mappings to schema.org. A taxonomy can still be constructed by ignoring all superclasses but the first one specified in the list, which is considered the most important.
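Deriving that tree-shaped taxonomy from the DAG is straightforward. In this sketch (the class names and data layout are hypothetical), each class lists its superclasses in order of importance, and we keep only the first:

```python
# Hypothetical excerpt of the DAG ontology: each class maps to an ordered
# list of superclasses, the first being the most important.
dag = {
    "SoccerPlayer": ["Athlete", "Person"],
    "Athlete": ["Person"],
    "Person": ["Agent"],
}

# Collapse the DAG to a tree by keeping only the first superclass per class.
taxonomy = {cls: supers[0] for cls, supers in dag.items()}
```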

4. Mappings

  • Dynamic statistics for infobox mappings showing the overall and individual coverage of the mappings in each language: http://mappings.dbpedia.org/index.php/Mapping_Statistics
  • Improved DBpedia Ontology as well as improved Infobox mappings using http://mappings.dbpedia.org/. These improvements are largely due to collective work by the community before and during the DBpedia Mapping Creation Sprint. For English, there are 17.5 million RDF statements based on mappings (13.8 million in version 3.6) (see also http://dbpedia.org/Downloads37#ontologyinfoboxproperties).
  • ConstantProperty mappings to capture information from the template title (e.g. Infobox_Australian_Road {{TemplateMapping | mapToClass = Road | mappings = {{ConstantMapping | ontologyProperty = country | value = Australia }}}})
  • Language specification for string properties in PropertyMappings (e.g. Infobox_japan_station: {{PropertyMapping | templateProperty = name | ontologyProperty = foaf:name | language = ja}} )
  • Multiplication factor in PropertyMappings (e.g. Infobox_GB_station: {{PropertyMapping | templateProperty = usage0910 | ontologyProperty = passengersPerYear | factor = 1000000}}, because it’s always specified in millions)
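The multiplication-factor mapping above can be sketched in a few lines. This is a hedged illustration of the idea, not the framework's parser: a raw template value given in millions is scaled by the mapping's factor before it becomes a triple value:

```python
# Sketch of applying a PropertyMapping's factor to a raw template value.
def apply_mapping(raw_value, factor=1.0):
    """Parse a numeric template value (commas allowed) and scale it."""
    return float(raw_value.replace(",", "")) * factor

# Infobox_GB_station: usage0910 is always specified in millions of passengers.
passengers = apply_mapping("1.2", factor=1_000_000)
```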

5. RDF Links to External Data Sources

  • New RDF links pointing at resources in the following Linked Data sources: UMBEL, EUNIS, LinkedMDB, GeoSpecies
  • Updated RDF links pointing at resources in the following Linked Data sources: Freebase, WordNet, OpenCyc, New York Times, DrugBank, Diseasome, flickr wrappr, SIDER, Factbook, DBLP, Eurostat, DailyMed, Revyu

Accessing the new DBpedia Release

You can download the new DBpedia dataset from http://dbpedia.org/Downloads37.

As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint (http://dbpedia.org/sparql).
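As a getting-started sketch, here is how a query against the public endpoint can be formed. Only the request URL is built here (no network call is made); the query string itself and the JSON result format are standard SPARQL, not DBpedia-specific:

```python
# Build a GET request URL for the public DBpedia SPARQL endpoint.
from urllib.parse import urlencode

ENDPOINT = "http://dbpedia.org/sparql"
query = """
SELECT ?label WHERE {
  <http://dbpedia.org/resource/Berlin> rdfs:label ?label .
  FILTER (lang(?label) = "de")
}
"""
url = ENDPOINT + "?" + urlencode(
    {"query": query, "format": "application/sparql-results+json"}
)
```

Fetching `url` with any HTTP client then returns the German labels of Berlin as SPARQL JSON results.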

Credits

Lots of thanks to

  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • Max Jakob (Freie Universität Berlin, Germany) for improving the DBpedia extraction framework and for extracting the new datasets.
  • Dimitris Kontokostas (Aristotle University of Thessaloniki, Greece) for providing language generalizations to the extraction framework.
  • Paul Kreis (Freie Universität Berlin, Germany) for administering the ontology and for delivering the mapping statistics and schema.org mappings.
  • Uli Zellbeck (Freie Universität Berlin, Germany) for providing the links to external datasets using the Silk framework.
  • The whole Internationalization Committee for expanding some DBpedia extractors to a number of languages:
    http://dbpedia.org/Internationalization.
  • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the dataset into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint. OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia.

The work on the new release was financially supported by:

  • The European Commission through the project LOD2 – Creating Knowledge out of Linked Data (http://lod2.eu/, improvements to the extraction framework).
  • The European Commission through the project LATC – LOD Around the Clock (http://latc-project.eu/, creation of external RDF links).
  • Vulcan Inc. as part of its Project Halo (http://www.projecthalo.com/).

More information about DBpedia can be found at http://dbpedia.org/About.

Have fun with the new data set!

Cheers,

Chris Bizer