
DBpedia 3.7 released, including 15 localized Editions

September 11, 2011 - 11:14 am by ChrisBizer

Hi all,

we are happy to announce the release of DBpedia 3.7. The new release is based on Wikipedia dumps dating from late July 2011.

The new DBpedia data set describes more than 3.64 million things, of which 1.83 million are classified in a consistent ontology, including 416,000 persons, 526,000 places, 106,000 music albums, 60,000 films, 17,500 video games, 169,000 organizations, 183,000 species and 5,400 diseases.

The DBpedia data set features labels and abstracts for 3.64 million things in up to 97 different languages; 2,724,000 links to images and 6,300,000 links to external web pages; 6,200,000 external links into other RDF datasets, and 740,000 Wikipedia categories. The dataset consists of 1 billion pieces of information (RDF triples) out of which 385 million were extracted from the English edition of Wikipedia and roughly 665 million were extracted from other language editions and links to external datasets.

Localized Editions

Until now, we extracted data from non-English Wikipedia pages only if an equivalent English page existed, as we wanted a single URI to identify a resource across all 97 languages. However, many pages in the non-English Wikipedia editions have no English equivalent (especially small towns in different countries, e.g. the Austrian village Endach, or legal and administrative terms that are only relevant for a single country). Relying on English URIs alone therefore meant that DBpedia contained no data for these entities, and many DBpedia users have complained about this shortcoming.

As part of the DBpedia 3.7 release, we now provide 15 localized DBpedia editions for download that contain data from all Wikipedia pages in a specific language. These localized editions cover the following languages: ca, de, el, es, fr, ga, hr, hu, it, nl, pl, pt, ru, sl, tr. The URIs identifying entities in these i18n data sets are constructed directly from the non-English title and a language-specific URI namespace (e.g. http://ru.dbpedia.org/resource/Berlin), so there are now 16 different URIs in DBpedia that refer to Berlin. We also extract the inter-language links from the different Wikipedia editions. Thus, whenever an inter-language link between a non-English Wikipedia page and its English equivalent exists, the resulting owl:sameAs link can be used to relate the localized DBpedia URI to its equivalent in the main (English) DBpedia edition. The localized DBpedia editions are provided for download on the DBpedia download page (http://wiki.dbpedia.org/Downloads37). Note that we do not provide public SPARQL endpoints for the localized editions, nor do the localized URIs dereference. This might change in the future, as more local DBpedia chapters are set up in different countries as part of the DBpedia internationalization effort (http://dbpedia.org/Internationalization).
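As a minimal sketch of how these links can be used (assuming the inter-language owl:sameAs links are loaded into the public endpoint, and that the endpoint accepts Virtuoso's format parameter), the following Python snippet asks the main SPARQL endpoint for all localized URIs of Berlin:

import json
import urllib.parse
import urllib.request

# Ask the main endpoint for localized URIs that are owl:sameAs dbpedia:Berlin.
query = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?localized WHERE {
  <http://dbpedia.org/resource/Berlin> owl:sameAs ?localized .
}
"""
url = "http://dbpedia.org/sparql?" + urllib.parse.urlencode(
    {"query": query, "format": "application/sparql-results+json"})
with urllib.request.urlopen(url) as response:
    results = json.load(response)
for binding in results["results"]["bindings"]:
    print(binding["localized"]["value"])  # e.g. http://ru.dbpedia.org/resource/Berlin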

Other Changes

Beside the new localized editions, the DBpedia 3.7 release provides the following improvements and changes compared to the last release:

1. Framework

  • Redirects are resolved in a post-processing step, increasing inter-connectivity by 13% (applied to the English data sets; see the sketch after this list)
  • Extractor configuration using the dependency injection principle
  • Simple threaded loading of mappings in server
  • Improved international language parsing support thanks to the members of the Internationalization Committee: http://dbpedia.org/Internationalization
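To illustrate the redirect-resolution step, here is a minimal sketch (with a hypothetical redirect table) that rewrites triple objects to their final redirect targets:

# Hypothetical redirect table: redirect source -> redirect target.
redirects = {"NYC": "New_York_City"}

def resolve(title: str) -> str:
    """Follow redirect chains to the final target, guarding against cycles."""
    seen = set()
    while title in redirects and title not in seen:
        seen.add(title)
        title = redirects[title]
    return title

triples = [("Empire_State_Building", "location", "NYC")]
resolved = [(s, p, resolve(o)) for s, p, o in triples]
# [('Empire_State_Building', 'location', 'New_York_City')]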

2. Bugfixes

  • Encode homepage URLs to conform with N-Triples spec
  • Correct reference parsing
  • Recognize MediaWiki parser functions
  • Raw infobox extraction produces more object properties again
  • skos:related for category links starting with “:” and having an anchor text
  • Restrict objects to Main namespace in MappingExtractor
  • Double rounding (e.g. a person’s height should not be 1800.00000001 cm)
  • Start position in abstract extractor
  • Server can handle template names containing a slash
  • Encoding issues in YAGO dumps

3. Ontology

  • 320 ontology classes
  • 750 object properties
  • 893 datatype properties
  • owl:equivalentClass and owl:equivalentProperty mappings to http://schema.org

Note that the ontology is now a directed acyclic graph. Classes can have multiple superclasses, which was important for the mappings to schema.org. A taxonomy can still be constructed by ignoring all superclasses but the first one specified in the list, which is considered the most important.
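For illustration, here is a minimal sketch (with hypothetical class names) of deriving such a taxonomy from the ontology's superclass lists:

# Each class maps to its ordered list of superclasses (hypothetical data).
superclasses = {
    "Hospital": ["Building", "CivicStructure"],  # multiple superclasses
    "Building": ["ArchitecturalStructure"],
    "CivicStructure": ["Place"],
}

# Keep only the first (most important) superclass to obtain a tree.
taxonomy = {cls: supers[0] for cls, supers in superclasses.items()}
# {'Hospital': 'Building', 'Building': 'ArchitecturalStructure', 'CivicStructure': 'Place'}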

4. Mappings

  • Dynamic statistics for infobox mappings showing the overall and individual coverage of the mappings in each language: http://mappings.dbpedia.org/index.php/Mapping_Statistics
  • Improved DBpedia Ontology as well as improved Infobox mappings using http://mappings.dbpedia.org/. These improvements are largely due to collective work by the community before and during the DBpedia Mapping Creation Sprint. For English, there are 17.5 million RDF statements based on mappings (13.8 million in version 3.6) (see also http://dbpedia.org/Downloads37#ontologyinfoboxproperties).
  • ConstantProperty mappings to capture information from the template title (e.g. Infobox_Australian_Road {{TemplateMapping | mapToClass = Road | mappings = {{ConstantMapping | ontologyProperty = country | value = Australia }}}})
  • Language specification for string properties in PropertyMappings (e.g. Infobox_japan_station: {{PropertyMapping | templateProperty = name | ontologyProperty = foaf:name | language = ja}} )
  • Multiplication factor in PropertyMappings (e.g. Infobox_GB_station: {{PropertyMapping | templateProperty = usage0910 | ontologyProperty = passengersPerYear | factor = 1000000}}, because the value is always specified in millions; see the sketch below)
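A minimal sketch of how such a factor is applied during extraction (hypothetical function; the real framework's value parsing is more involved):

def apply_property_mapping(raw_value: str, factor: float = 1.0) -> float:
    """Parse a raw infobox value and scale it by the mapping's factor."""
    return float(raw_value.replace(",", "")) * factor

# Infobox_GB_station specifies usage0910 in millions, hence factor = 1000000:
passengers = apply_property_mapping("16.133", factor=1_000_000)  # 16133000.0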

5. RDF Links to External Data Sources

  • New RDF links pointing at resources in the following Linked Data sources: UMBEL, EUNIS, LinkedMDB, GeoSpecies
  • Updated RDF links pointing at resources in the following Linked Data sources: Freebase, WordNet, OpenCyc, New York Times, DrugBank, Diseasome, flickr wrappr, SIDER, Factbook, DBLP, Eurostat, DailyMed, Revyu

Accessing the new DBpedia Release

You can download the new DBpedia dataset from http://dbpedia.org/Downloads37.

As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint (http://dbpedia.org/sparql).
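For example, a single resource can be fetched as RDF by dereferencing its URI with an RDF Accept header (a minimal sketch, assuming the server offers Turtle via content negotiation):

import urllib.request

# Content negotiation: ask for Turtle instead of the HTML view.
req = urllib.request.Request(
    "http://dbpedia.org/resource/Berlin",
    headers={"Accept": "text/turtle"})
with urllib.request.urlopen(req) as response:  # follows the 303 redirect
    print(response.read().decode("utf-8")[:500])  # first lines of the Turtle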

Credits

Lots of thanks to

  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • Max Jakob (Freie Universität Berlin, Germany) for improving the DBpedia extraction framework and for extracting the new datasets.
  • Dimitris Kontokostas (Aristotle University of Thessaloniki, Greece) for providing language generalizations to the extraction framework.
  • Paul Kreis (Freie Universität Berlin, Germany) for administering the ontology and for delivering the mapping statistics and schema.org mappings.
  • Uli Zellbeck (Freie Universität Berlin, Germany) for providing the links to external datasets using the Silk framework.
  • The whole Internationalization Committee for expanding some DBpedia extractors to a number of languages:
    http://dbpedia.org/Internationalization.
  • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the dataset into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint, and OpenLink Software (http://www.openlinksw.com/) as a whole for providing the server infrastructure for DBpedia.

The work on the new release was financially supported by:

  • The European Commission through the project LOD2 - Creating Knowledge out of Linked Data (http://lod2.eu/, improvements to the extraction framework).
  • The European Commission through the project LATC - LOD Around the Clock (http://latc-project.eu/, creation of external RDF links).
  • Vulcan Inc. as part of its Project Halo (http://www.projecthalo.com/).

More information about DBpedia can be found at http://dbpedia.org/About

Have fun with the new data set!

Cheers,

Chris Bizer

Official DBpedia Live Release

July 9, 2011 - 12:50 pm by Sören

We are pleased to announce the official release of DBpedia Live. The main objective of DBpedia is to extract structured information from Wikipedia, convert it into RDF, and make it freely available on the Web. In a nutshell, DBpedia is the Semantic Web mirror of Wikipedia.

Wikipedia users constantly revise Wikipedia articles, with updates happening almost every second. Hence, data stored in the official DBpedia endpoint can quickly become outdated, and Wikipedia articles need to be re-extracted. DBpedia Live enables such a continuous synchronization between DBpedia and Wikipedia.

The DBpedia Live framework has the following new features:

  1. Migration from the previous PHP framework to the new Java/Scala DBpedia framework.
  2. Support of clean abstract extraction.
  3. Automatic reprocessing of all pages affected by a schema mapping change at http://mappings.dbpedia.org.
  4. Automatic reprocessing of pages that have not changed for more than one month. The main objective of this feature is to ensure that any change in the DBpedia framework, e.g. the addition or change of an extractor, will eventually affect all extracted resources. It also serves as a fallback for technical problems in Wikipedia or the update stream.
  5. Publication of all changesets.
  6. Provision of a tool to enable other DBpedia mirrors to be in synchronization with our DBpedia Live endpoint. The tool continuously downloads changesets and performs changes in a specified triple store accordingly.


Thanks a lot to Mohamed Morsey, who implemented this version of DBpedia Live, as well as to Sebastian Hellmann and Claus Stadler, who worked on its predecessor. We also thank our partners at the FU Berlin and OpenLink as well as the LOD2 project for their support.

Open Data Challenge awards €20,000 in prizes to open public data apps

May 6, 2011 - 9:23 am by Sören

European public bodies produce thousands upon thousands of datasets every year - about everything from how our tax money is spent to the quality of the air we breathe.

The Open Data Challenge competition challenges designers, developers, journalists, researchers and the general public to come up with something useful, valuable or interesting using open public data.

There are four main strands to the competition:

  • Ideas – Anyone can suggest an idea for projects which reuse public information to do something interesting or useful.
  • Apps – Teams of developers can submit working applications which reuse public information.
  • Visualisations – Designers, artists and others can submit interesting or insightful visual representations of public information.
  • Datasets - We encourage the submission of any form of open datasets produced by public governmental bodies, either submitted directly by the public body or by developers or others who have transformed, cleaned or interlinked the data.

    The competition is open until midnight on 5 June. The winners will be selected by an all-star cast of open data gurus and announced in mid-June at the European Digital Assembly in Brussels. More information can be found at: http://opendatachallenge.org/

    DBpedia Spotlight - Text Annotation Toolkit released

    February 15, 2011 - 2:53 pm by ChrisBizer

    We are happy to announce a first release of DBpedia Spotlight - Shedding Light on the Web of Documents. 

    The amount of data in the Linked Open Data cloud is steadily increasing. Interlinking text documents with this data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. 

    DBpedia Spotlight is a tool for annotating mentions of DBpedia resources in text, providing a solution for linking unstructured information sources to the Linked Open Data cloud through DBpedia. The DBpedia Spotlight architecture is composed of the following modules:

    • Web application, a demonstration client (HTML/Javascript UI) that allows users to enter/paste text into a Web browser and visualize the resulting annotated text.

    • Web Service, a RESTful Web API that exposes the functionality of annotating and/or disambiguating entities in text. The service returns XML, JSON or RDF (see the example call after this list).

    • Annotation Java / Scala API, exposing the underlying logic that performs the annotation/disambiguation.

    • Indexing Java / Scala API, executing the data processing necessary to enable the annotation/disambiguation algorithms used.
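    The following Python snippet illustrates a call to the annotation Web Service; the endpoint path and parameter names are assumptions based on the service's documentation at the time, so treat them as illustrative:

import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "text": "Berlin is the capital of Germany.",
    "confidence": 0.2,  # disambiguation confidence threshold (assumed name)
    "support": 20,      # minimum resource prominence (assumed name)
})
req = urllib.request.Request(
    "http://spotlight.dbpedia.org/rest/annotate?" + params,
    headers={"Accept": "application/json"})
with urllib.request.urlopen(req) as response:
    print(json.load(response))  # annotations with DBpedia resource URIs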

    More information about DBpedia Spotlight can be found at: 

    http://spotlight.dbpedia.org 

    DBpedia Spotlight is provided under the terms of the Apache License, Version 2.0. Part of the code uses LingPipe under the Royalty Free License.

     

    The source code can be downloaded from: 

    http://sourceforge.net/projects/dbp-spotlight 

    The development of DBpedia Spotlight was supported by: 

    • Neofonie GmbH, a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications (http://www.neofonie.de/).

    • The European Commission through the project LOD2 – Creating Knowledge out of Linked Data (http://lod2.eu/). 

    Lots of thanks to:

    • Andreas Schultz for his help with the SPARQL endpoint.

    • Paul Kreis for his help with evaluations.

    • Robert Isele and Anja Jentzsch for their help in early stages with the DBpedia extraction framework.

    Cheers,

     Pablo N. Mendes, Max Jakob, Andrés García-Silva and Chris Bizer.

    DBpedia 3.6 AMI Available

    January 31, 2011 - 3:46 pm by kidehen@openlinksw.com

    In line with prior releases of DBpedia, there is a new 3.6 edition of the DBpedia AMI available from Amazon EC2.

    What is a DBpedia AMI?

    A preconfigured Virtuoso Cluster Edition database that includes a preloaded DBpedia dataset. The entire deliverable is packaged as an Amazon Machine Image (AMI), which is a cloud-hosted virtual machine.

    Why is it Important?

    It enables you to productively exploit the power of DBpedia within minutes. Basically, you can create DBpedia instances that serve your personal or service-specific needs. Thus, you do not have to constrain your use of DBpedia to the live instance, which is configured for Web-scale use and therefore imposes server-side constraints on concurrent connections, query timeouts, and result set sizes.

    How do I use it?

    Simply follow the instructions in the DBpedia AMI guide which boils down to:

    1. Instantiating a Virtuoso EC2 AMI
    2. Mounting the Amazon Elastic Block Store (EBS) snapshot that hosts the preloaded Virtuoso database.

    Enjoy!

    DBpedia 3.6 released

    January 17, 2011 - 2:01 pm by ChrisBizer

    Hi all, 

    we are happy to announce the release of DBpedia 3.6. The new release is based on Wikipedia dumps dating from October/November 2010. 

     The new DBpedia dataset describes more than 3.5 million things, of which 1.67 million are classified in a consistent ontology, including 364,000 persons, 462,000 places, 99,000 music albums, 54,000 films, 16,500 video games, 148,000 organizations, 148,000 species and 5,200 diseases.  The DBpedia dataset features labels and abstracts for 3.5 million things in up to 97 different languages; 1,850,000 links to images and 5,900,000 links to external web pages; 6,500,000 external links into other RDF datasets, and 632,000 Wikipedia categories.  

    The dataset consists of 672 million pieces of information (RDF triples) out of which 286 million were extracted from the English edition of Wikipedia and 386 million were extracted from other language editions and links to external datasets.  

    Along with the release of the new datasets, we are happy to announce the initial release of the DBpedia MappingTool (http://mappings.dbpedia.org/index.php/MappingTool): a graphical user interface to support the community in creating and editing mappings as well as the ontology.  

    The new release provides the following improvements and changes compared to the DBpedia 3.5.1 release:  

    1. Improved DBpedia Ontology as well as improved Infobox mappings using http://mappings.dbpedia.org/  

    Furthermore, there are now also mappings in languages other than English. These improvements are largely due to collective work by the community. There are 13.8 million RDF statements based on mappings (11.1 million in version 3.5.1). All this data is in the /ontology/ namespace. Note that this data is of much higher quality than the Raw Infobox data in the /property/ namespace.  

    Statistics of the mappings wiki on the date of release 3.6:  

    Mappings:     

    • English: 315 Infobox mappings (covers 1124 templates including redirects)     
    • Greek: 137 Infobox mappings (covers 192 templates including redirects)     
    • Hungarian: 111 Infobox mappings (covers 151 templates including redirects)     
    • Croatian: 36 Infobox mappings (covers 67 templates including redirects)     
    • German: 9 Infobox mappings
    • Slovenian: 4 Infobox mappings

    Ontology:     

    • 272 classes

    Properties:     

    • 629 object properties     
    • 706 datatype properties (they are all in the /datatype/ namespace)  

    2.  Some commonly used property names changed  

    Please see http://dbpedia.org/ChangeLog and http://dbpedia.org/Datasets/Properties to know which relations changed and update your applications accordingly!  

    3. New Datatypes for increased quality in mapping-based properties  

    • xsd:positiveInteger, xsd:nonNegativeInteger, xsd:nonPositiveInteger, xsd:negativeInteger 
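    As a small illustration of what these stricter types express, here is a sketch that picks the tightest of the four new XSD integer types for a given value:

def xsd_integer_type(value: int) -> str:
    """Pick the most specific of the four new XSD integer types."""
    if value > 0:
        return "xsd:positiveInteger"     # implies xsd:nonNegativeInteger
    if value == 0:
        return "xsd:nonNegativeInteger"  # zero is also xsd:nonPositiveInteger
    return "xsd:negativeInteger"         # implies xsd:nonPositiveInteger

print(xsd_integer_type(42))  # xsd:positiveInteger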

    4. Improved parsing coverage 

    • Parsing of lists of elements in Infobox property values that improves the completeness of extracted facts
    • Method to deal with repeated links that are omitted in infoboxes but do appear somewhere else on the page.
    • Flag templates are parsed.
    • Various improvements on internationalization.  

    5. Improved recognition of  

    • Wikipedia language codes.
    • Wikipedia namespace identifiers.
    • Category hierarchies.  

    6. Disambiguation links for acronyms (all upper-case titles) are now extracted (for example, Kilobyte and Knowledge_base for “KB”); a sketch of both rules follows this list:

    • Wikilinks consisting of multiple words: If the starting letters of the words appear in correct order (with possible gaps) and cover all acronym letters.
    • Wikilinks consisting of a single word: If the case-insensitive longest common subsequence with the acronym is equal to the acronym. 
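    A minimal Python sketch of these two rules (one interpretation; the framework's actual implementation may differ):

def is_subsequence(needle: str, haystack) -> bool:
    """True if all characters of needle occur in haystack, in order."""
    it = iter(haystack)
    return all(ch in it for ch in needle)

def matches_acronym(link_title: str, acronym: str) -> bool:
    acronym = acronym.lower()
    words = link_title.replace("_", " ").split()
    if len(words) > 1:
        # Multi-word: the word initials must cover all acronym letters, in order.
        initials = [w[0].lower() for w in words]
        return is_subsequence(acronym, initials)
    # Single word: LCS(word, acronym) == acronym, i.e. the acronym is a
    # case-insensitive subsequence of the word.
    return is_subsequence(acronym, words[0].lower())

print(matches_acronym("Knowledge_base", "KB"))  # True
print(matches_acronym("Kilobyte", "KB"))        # True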

    7. New ‘Geo-Related’ Extractor

    • Relates articles to the resources of countries whose labels appear in the names of the articles’ categories.

    8. Encoding (bugfixes) 

    • The new datasets support the complete range of Unicode code points (up to 0x10FFFF). 16-bit code points start with ‘\u’; code points larger than 16 bits start with ‘\U’.
    • Commas and ampersands do not get encoded anymore in URIs. Please see http://dbpedia.org/URIencoding for an explanation regarding the DBpedia URI encoding scheme.  

    9. Extended Datasets 

    • Thanks to Johannes Hoffart (Max-Planck-Institut für Informatik) for contributing links to YAGO2.
    • Freebase links have been updated. They now refer to mids (http://wiki.freebase.com/wiki/Machine_ID) because guids have been deprecated.  

    You can download the new DBpedia dataset from http://dbpedia.org/Downloads36 

    As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql 

    Lots of thanks to:  

    • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
    • Max Jakob (Freie Universität Berlin, Germany) for improving the DBpedia extraction framework and for extracting the new datasets.
    • Robert Isele and Anja Jentzsch (both Freie Universität Berlin, Germany) for helping Max with their expertise on the extraction framework.
    • Paul Kreis (Freie Universität Berlin, Germany) for analyzing the DBpedia data of the previous release and suggesting ways to increase quality and quantity. Some results of his work were implemented in this release.
    • Dimitris Kontokostas (Aristotle University of Thessaloniki, Greece), Jimmy O’Regan (Eolaistriu Technologies, Ireland), José Paulo Leal (University of Porto, Portugal) for providing patches to improve the extraction framework.
    • Claus Stadler (Universität Leipzig, Germany) for implementing the Geo-Related extractor and extracting its data.
    • Jens Lehmann and Sören Auer (both Universität Leipzig, Germany) for providing the new dataset via the DBpedia download server at Universität Leipzig.
    • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the dataset into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint, and OpenLink Software (http://www.openlinksw.com/) as a whole for providing the server infrastructure for DBpedia.

    The work on the new release was financially supported by  

    • Neofonie GmbH, a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications (http://www.neofonie.de/).
    • The European Commission through the project LOD2 - Creating Knowledge out of Linked Data (http://lod2.eu/).
    • Vulcan Inc. as part of its Project Halo (http://www.projecthalo.com/). Vulcan Inc. creates and advances a variety of world-class endeavors and high-impact initiatives that change and improve the way we live, learn, and do business (http://www.vulcan.com/).

    More information about DBpedia can be found at http://dbpedia.org/About

    Have fun with the new dataset!  

    The whole DBpedia team also congratulates Wikipedia on its 10th birthday, which was this weekend!

    Cheers,  

    Chris Bizer

    Links to DBpedia from Ontos NLP web services

    October 20, 2010 - 10:45 am by Sören

    The NLP specialist Ontos extends the quality and amount of information available to developers by integrating its news portal into the Linked Data Cloud. Ontos’ GUIDs for objects are now dereferenceable; the resulting RDF contains owl:sameAs statements pointing to DBpedia, Freebase and others (see, e.g., the entry for Barack Obama).

    Within the news portal, Ontos crawls news articles from diverse online sources, uses its cutting-edge NLP technology to extract facts (objects and relations between them), merges this information with existing facts, and stores everything together with references to the original news articles - all of this fully automatically. Facts from Ontos’ portal are accessible via a RESTful HTTP API. Fetching data is free; to receive an API key, developers only have to register (e-mail address only!) at Ontos’ homepage.

    For humans, Ontos provides a search interface at http://www.ontosearch.com. It allows users to look up objects in the database and view their summaries in HTML or RDF.

    Please note that the generated RDF currently contains only a small part of the available information (e.g. no article references yet). Ontos will extend the respective content step by step.

    DBpedia 3.5.1 available on Amazon EC2

    August 10, 2010 - 8:49 am by ChrisBizer

    As Amazon Web Services is being used a lot for cloud computing, we have started to provide current snapshots of the DBpedia dataset for this environment.

    We provide the DBpedia dataset for Amazon Web Services in two ways:

    1. Source files to be mounted: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2319

    2. Virtuoso SPARQL store to be instantiated: http://www.openlinksw.com/dataspace/dav/wiki/Main/VirtAWSDBpedia351C

    DBpedia 3.5.1 released

    April 28, 2010 - 7:18 pm by AnjaJentzsch

    Hi all,

    we are happy to announce the release of DBpedia 3.5.1.

    This is primarily a bugfix release, based on Wikipedia dumps dating from March 2010. Thanks to the great community feedback on the previous DBpedia release, we were able to resolve the reported issues as well as improve the template-to-ontology mappings.

    The new release provides the following improvements and changes compared to the DBpedia 3.5 release:

    1. Some abstracts contained unwanted WikiText markup. The detection of infoboxes and tables has been improved, so that even most pages with syntax errors have clean abstracts now.
    2. In 3.5 there was an issue detecting interlanguage links, which led to some non-English statements having the wrong subject. This has been fixed.
    3. Image references to dummy images (e.g. http://en.wikipedia.org/wiki/Image:Replace_this_image.svg) have been removed.
    4. DBpedia 3.5.1 now uses a stricter IRI validation. Care has been taken to discard only those URIs from Wikipedia that are clearly invalid.
    5. Recognition of disambiguation pages has been improved, increasing the size of that dataset from 247,000 to 769,000 triples.
    6. More geographic coordinates are extracted now, increasing their number from 1,200,000 to 1,500,000 in the English version.
    7. For this release, all Freebase links have been regenerated from the most recent Freebase dump.

    You can download the new DBpedia dataset from http://wiki.dbpedia.org/Downloads351. As usual, the data set is also available as Linked Data and via the DBpedia SPARQL endpoint.

    Lots of thanks to:

    • Jens Lehmann and Sören Auer (both Universität Leipzig) for providing the knowledge base via the DBpedia download server at Universität Leipzig.
    • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the knowledge base into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.

    The whole DBpedia team is very thankful to three companies which enabled us to do all this by supporting and sponsoring the DBpedia project:

    • Neofonie GmbH (http://www.neofonie.de), a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications.
    • Vulcan Inc. as part of its Project Halo (http://www.projecthalo.com). Vulcan Inc. creates and advances a variety of world-class endeavors and high impact initiatives that change and improve the way we live, learn, do business (http://www.vulcan.com).
    • OpenLink Software (http://www.openlinksw.com). OpenLink Software develops the Virtuoso Universal Server, an innovative enterprise grade server that cost-effectively delivers an unrivaled platform for Data Access, Integration and Management.

    More information about DBpedia can be found at http://dbpedia.org/About

    Have fun with the new DBpedia knowledge base!

    Cheers,

    Robert Isele and Anja Jentzsch

    DBpedia 3.5 released

    April 12, 2010 - 11:28 am by ChrisBizer

    Hi all,

    we are happy to announce the release of DBpedia 3.5. The new release is based on Wikipedia dumps dating from March 2010. Compared to the 3.4 release, we were able to increase the quality of the DBpedia knowledge base by employing a new data extraction framework which applies various data cleansing heuristics as well as by extending the infobox-to-ontology mappings that guide the data extraction process.

    The new DBpedia knowledge base describes more than 3.4 million things, out of which 1.47 million are classified in a consistent ontology, including 312,000 persons, 413,000 places, 94,000 music albums, 49,000 films, 15,000 video games, 140,000 organizations, 146,000 species and 4,600 diseases. The DBpedia data set features labels and abstracts for 3.2 million of these things in up to 92 different languages; 1,460,000 links to images and 5,543,000 links to external web pages; 4,887,000 external links into other RDF datasets, 565,000 Wikipedia categories, and 75,000 YAGO categories. The DBpedia knowledge base altogether consists of over 1 billion pieces of information (RDF triples) out of which 257 million were extracted from the English edition of Wikipedia and 766 million were extracted from other language editions.

    The new release provides the following improvements and changes compared to the DBpedia 3.4 release:

    1. The DBpedia extraction framework has been completely rewritten in Scala. The new framework dramatically reduces the extraction time of a single Wikipedia article from over 200 to about 13 milliseconds. All features of the previous PHP framework have been ported. In addition, the new framework can extract data from Wikipedia tables based on table-to-ontology mappings and is able to extract multiple infoboxes out of a single Wikipedia article. The data from each infobox is represented as a separate RDF resource. All resources that are extracted from a single page can be connected using custom RDF properties which are also defined in the mappings. A lot of work also went into the value parsers and the DBpedia 3.5 dataset should therefore be much cleaner than its predecessors. In addition, units of measurement are normalized to their respective SI unit, which makes querying DBpedia easier.
    2. The mapping language that is used to map Wikipedia infoboxes to the DBpedia Ontology has been redesigned. The documentation of the new mapping language is found at http://dbpedia.svn.sourceforge.net/viewvc/dbpedia/trunk/extraction/core/doc/mapping%20language/
    3. In order to enable the DBpedia user community to extend and refine the infobox to ontology mappings, the mappings can be edited on the newly created wiki hosted on http://mappings.dbpedia.org.  At the moment, 303 template mappings are defined, which cover (including redirects) 1055 templates. On the wiki, the DBpedia Ontology can be edited by the community as well. At the moment, the ontology consists of 259 classes and about 1,200 properties. 
    4. The ontology properties extracted from infoboxes are now split into two data sets (for details see: http://wiki.dbpedia.org/Datasets):
       1. The Ontology Infobox Properties dataset contains the properties as they are defined in the ontology (e.g. length). The range of a property is either an xsd schema type or a dimension of measurement, in which case the value is normalized to the respective SI unit.
       2. The Ontology Infobox Properties (Specific) dataset contains properties which have been specialized for a specific class using a specific unit, e.g. the property height is specialized on the class Person using the unit centimeters instead of meters (see the sketch below).
    5. The framework now resolves template redirects, making it possible to cover all redirects to an infobox on Wikipedia with a single mapping. 
    6. Three new extractors have been implemented:
       1. PageIdExtractor, which extracts the Wikipedia page ID of each page.
       2. RevisionExtractor, which extracts the latest revision of a page.
       3. PNDExtractor, which extracts PND (Personennamendatei) identifiers.
    7. The data set now provides labels, abstracts, page links and infobox data in 92 different languages, which have been extracted from recent Wikipedia dumps as of March 2010.
    8. In addition to the N-Triples datasets, N-Quads datasets are provided which add a provenance URI to each statement. The provenance URI denotes the origin of the extracted triple in Wikipedia (for details see: http://wiki.dbpedia.org/Datasets).

    You can download the new DBpedia dataset from http://wiki.dbpedia.org/Downloads35. As usual, the data set is also available as Linked Data and via the DBpedia SPARQL endpoint.
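    As a sketch of the unit normalization mentioned in items 1 and 4 (with a hypothetical conversion table; the framework's actual tables and names differ):

# Multiply by these factors to obtain the SI base unit (meters) for lengths.
SI_FACTORS = {
    "kilometer": 1000.0,
    "meter": 1.0,
    "centimeter": 0.01,
    "mile": 1609.344,
    "foot": 0.3048,
}

def normalize(value: float, unit: str) -> float:
    """Convert a parsed infobox value to the SI base unit of its dimension."""
    return value * SI_FACTORS[unit]

height_m = normalize(5.0, "foot")  # 1.524, as stored in the /ontology/ dataset
height_cm = height_m * 100.0       # 152.4, as in the Person-specific dataset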

    Lots of thanks to:

    • Robert Isele, Anja Jentzsch, Christopher Sahnwaldt, and Paul Kreis (all Freie Universität Berlin) for reimplementing the DBpedia extraction framework in Scala, for extending the infobox-to-ontology mappings and for extracting the new DBpedia 3.5 knowledge base. 
    • Jens Lehmann and Sören Auer (both Universität Leipzig) for providing the knowledge base via the DBpedia download server at Universität Leipzig.
    • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the knowledge base into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.

    The whole DBpedia team is very thankful to three companies which enabled us to do all this by supporting and sponsoring the DBpedia project:

    1. Neofonie GmbH (http://www.neofonie.de/index.jsp), a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications.
    2. Vulcan Inc. as part of its Project Halo (www.projecthalo.com). Vulcan Inc. creates and advances a variety of world-class endeavors and high impact initiatives that change and improve the way we live, learn, do business (http://www.vulcan.com/).
    3. OpenLink Software (http://www.openlinksw.com/). OpenLink Software develops the Virtuoso Universal Server, an innovative enterprise grade server that cost-effectively delivers an unrivaled platform for Data Access, Integration and Management.

    More information about DBpedia can be found at http://dbpedia.org/About

    Have fun with the new DBpedia knowledge base!

    Cheers

    Chris Bizer