
Open Data Challenge awards €20,000 in prizes for open public data apps

May 6, 2011 - 9:23 am by Sören

European public bodies produce thousands upon thousands of datasets every year - about everything from how our tax money is spent to the quality of the air we breathe.

The Open Data Challenge aims to challenge designers, developers, journalists, researchers and the general public to come up with something useful, valuable or interesting using open public data.

There are four main strands to the competition:

  • Ideas – Anyone can suggest an idea for projects which reuse public information to do something interesting or useful.
  • Apps – Teams of developers can submit working applications which reuse public information.
  • Visualisations – Designers, artists and others can submit interesting or insightful visual representations of public information.
  • Datasets - We encourage the submission of any form of open datasets produced by public governmental bodies, either submitted directly by the public body or by developers or others who have transformed, cleaned or interlinked the data.

    The competition is open until midnight on 5 June. The winners will be selected by an all-star cast of open data gurus and announced in mid-June at the European Digital Assembly in Brussels. More information can be found at: http://opendatachallenge.org/

    DBpedia Spotlight - Text Annotation Toolkit released

    February 15, 2011 - 2:53 pm by ChrisBizer

    We are happy to announce a first release of DBpedia Spotlight - Shedding Light on the Web of Documents. 

    The amount of data in the Linked Open Data cloud is steadily increasing. Interlinking text documents with this data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. 

    DBpedia Spotlight is a tool for annotating mentions of DBpedia resources in text, providing a solution for linking unstructured information sources to the Linked Open Data cloud through DBpedia. The DBpedia Spotlight architecture is composed of the following modules:

    • Web application, a demonstration client (HTML/Javascript UI) that allows users to enter/paste text into a Web browser and visualize the resulting annotated text.

    • Web Service, a RESTful Web API that exposes the functionality of annotating and/or disambiguating entities in text. The service returns XML, JSON or RDF.

    • Annotation Java / Scala API, exposing the underlying logic that performs the annotation/disambiguation.

    • Indexing Java / Scala API, executing the data processing necessary to enable the annotation/disambiguation algorithms used.
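As an illustrative sketch (not the official client), the Web Service above could be called as follows; the endpoint path and the parameter names (`text`, `confidence`, `support`) are assumptions modeled on the demo service, and the sample response is invented and abbreviated:

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, modeled on the demo service.
SERVICE = "http://spotlight.dbpedia.org/rest/annotate"

def build_annotate_url(text, confidence=0.2, support=20):
    """Build the GET URL that asks Spotlight to annotate `text`."""
    params = urlencode({"text": text, "confidence": confidence, "support": support})
    return SERVICE + "?" + params

url = build_annotate_url("Berlin is the capital of Germany.")

# Abbreviated, invented JSON response: the service returns one entry per
# recognized surface form, linking it to a DBpedia resource URI.
sample = ('{"Resources": [{"@URI": "http://dbpedia.org/resource/Berlin",'
          ' "@surfaceForm": "Berlin", "@offset": "0"}]}')
uris = [r["@URI"] for r in json.loads(sample)["Resources"]]
```

Requesting RDF instead of JSON would simply be a matter of an Accept header on the same request.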

    More information about DBpedia Spotlight can be found at: 

    http://spotlight.dbpedia.org 

    DBpedia Spotlight is provided under the terms of the Apache License, Version 2.0. Part of the code uses LingPipe under the Royalty Free License.

     

    The source code can be downloaded from: 

    http://sourceforge.net/projects/dbp-spotlight 

    The development of DBpedia Spotlight was supported by: 

    • Neofonie GmbH, a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications (http://www.neofonie.de/).

    • The European Commission through the project LOD2 – Creating Knowledge out of Linked Data (http://lod2.eu/). 

    Lots of thanks to:

    • Andreas Schultz for his help with the SPARQL endpoint.

    • Paul Kreis for his help with evaluations.

    • Robert Isele and Anja Jentzsch for their help in early stages with the DBpedia extraction framework.

    Cheers,

     Pablo N. Mendes, Max Jakob, Andrés García-Silva and Chris Bizer.

    DBpedia 3.6 AMI Available

    January 31, 2011 - 3:46 pm by kidehen@openlinksw.com

    In line with prior releases of DBpedia, there is a new 3.6 edition of the DBpedia AMI available from Amazon EC2.

    What is a DBpedia AMI?

    A preconfigured Virtuoso Cluster Edition database that includes a preloaded DBpedia dataset. The entire deliverable is packaged as an Amazon Machine Image (AMI), from which cloud-hosted virtual machines are instantiated.

    Why is it Important?

    It enables you to exploit the power of DBpedia productively within minutes. Basically, you can create DBpedia instances that serve your personal or service-specific needs. Thus, you are not constrained to the live instance, which is configured for Web-scale use, with server-side limits on concurrent connections, query timeouts, and result set sizes.

    How do I use it?

    Simply follow the instructions in the DBpedia AMI guide which boils down to:

    1. Instantiating a Virtuoso EC2 AMI
    2. Mounting the Amazon Elastic Block Storage (EBS) snapshot that hosts the preloaded Virtuoso Database.

    Enjoy!

    DBpedia 3.6 released

    January 17, 2011 - 2:01 pm by ChrisBizer

    Hi all, 

    we are happy to announce the release of DBpedia 3.6. The new release is based on Wikipedia dumps dating from October/November 2010. 

     The new DBpedia dataset describes more than 3.5 million things, of which 1.67 million are classified in a consistent ontology, including 364,000 persons, 462,000 places, 99,000 music albums, 54,000 films, 16,500 video games, 148,000 organizations, 148,000 species and 5,200 diseases.  The DBpedia dataset features labels and abstracts for 3.5 million things in up to 97 different languages; 1,850,000 links to images and 5,900,000 links to external web pages; 6,500,000 external links into other RDF datasets, and 632,000 Wikipedia categories.  

    The dataset consists of 672 million pieces of information (RDF triples) out of which 286 million were extracted from the English edition of Wikipedia and 386 million were extracted from other language editions and links to external datasets.  

    Along with the release of the new datasets, we are happy to announce the initial release of the DBpedia MappingTool (http://mappings.dbpedia.org/index.php/MappingTool): a graphical user interface to support the community in creating and editing mappings as well as the ontology.  

    The new release provides the following improvements and changes compared to the DBpedia 3.5.1 release:  

    1. Improved DBpedia Ontology as well as improved Infobox mappings using http://mappings.dbpedia.org/  

    Furthermore, there are now also mappings in languages other than English. These improvements are largely due to collective work by the community. There are 13.8 million RDF statements based on mappings (11.1 million in version 3.5.1). All this data is in the /ontology/ namespace. Note that this data is of much higher quality than the Raw Infobox data in the /property/ namespace.  

    Statistics of the mappings wiki on the date of release 3.6:  

    Mappings:     

    • English: 315 Infobox mappings (covers 1124 templates including redirects)     
    • Greek: 137 Infobox mappings (covers 192 templates including redirects)     
    • Hungarian: 111 Infobox mappings (covers 151 templates including redirects)     
    • Croatian: 36 Infobox mappings (covers 67 templates including redirects)     
    • German: 9 Infobox mappings
    • Slovenian: 4 Infobox mappings

    Ontology:     

    • 272 classes

    Properties:     

    • 629 object properties     
    • 706 datatype properties (they are all in the /datatype/ namespace)  

    2.  Some commonly used property names changed  

    Please see http://dbpedia.org/ChangeLog and http://dbpedia.org/Datasets/Properties to know which relations changed and update your applications accordingly!  

    3. New Datatypes for increased quality in mapping-based properties  

    • xsd:positiveInteger, xsd:nonNegativeInteger, xsd:nonPositiveInteger, xsd:negativeInteger 

    4. Improved parsing coverage 

    • Parsing of lists of elements in infobox property values, which improves the completeness of extracted facts.
    • A method to recover links that are omitted in infoboxes but appear elsewhere on the page.
    • Flag templates are parsed.
    • Various improvements on internationalization.  

    5. Improved recognition of  

    • Wikipedia language codes.
    • Wikipedia namespace identifiers.
    • Category hierarchies.  

    6. Disambiguation links for acronyms (all-upper-case titles) are now extracted (for example, Kilobyte and Knowledge_base for “KB”):  

    • Wikilinks consisting of multiple words match if the initial letters of the words appear in the correct order (with possible gaps) and cover all acronym letters.
    • Wikilinks consisting of a single word match if the case-insensitive longest common subsequence with the acronym equals the acronym.
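The single-word rule can be sketched as follows (an illustrative reimplementation, not the actual extractor code):

```python
# Illustrative reimplementation of the single-word acronym rule,
# not the actual DBpedia extractor code.
def lcs(a: str, b: str) -> str:
    """Case-insensitive longest common subsequence via dynamic programming."""
    a, b = a.lower(), b.lower()
    dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            if ca == cb:
                dp[i + 1][j + 1] = dp[i][j] + ca
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[len(a)][len(b)]

def single_word_matches(acronym: str, word: str) -> bool:
    """A one-word wikilink matches if its LCS with the acronym is the acronym."""
    return lcs(acronym, word) == acronym.lower()
```

For example, "Kilobyte" matches "KB" (the subsequence k…b is present), while "Kelvin" does not.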

    7. New ‘Geo-Related’ Extractor

    • Relates articles to country resources whose labels appear in the names of the articles’ categories.

    8. Encoding (bugfixes) 

    • The new datasets support the complete range of Unicode code points (up to 0x10FFFF). Code points that fit in 16 bits are escaped with ‘\u’; larger code points with ‘\U’.
    • Commas and ampersands do not get encoded anymore in URIs. Please see http://dbpedia.org/URIencoding for an explanation regarding the DBpedia URI encoding scheme.  
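A sketch of that escape convention (illustrative; the hex case and the exact set of characters left unescaped may differ in the real serializer):

```python
# Sketch of the escape convention: code points that fit in 16 bits become
# \uXXXX, larger ones \UXXXXXXXX. (Details may differ in the real serializer.)
def escape_codepoints(text: str) -> str:
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 0x80:            # plain ASCII stays as-is
            out.append(ch)
        elif cp <= 0xFFFF:       # fits in 16 bits
            out.append("\\u%04X" % cp)
        else:                    # beyond the Basic Multilingual Plane
            out.append("\\U%08X" % cp)
    return "".join(out)
```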

    9. Extended Datasets 

    • Thanks to Johannes Hoffart (Max-Planck-Institut für Informatik) for contributing links to YAGO2.
    • Freebase links have been updated. They now refer to mids (http://wiki.freebase.com/wiki/Machine_ID) because guids have been deprecated.  

    You can download the new DBpedia dataset from http://dbpedia.org/Downloads36 

    As usual, the dataset is also available as Linked Data and via the DBpedia SPARQL endpoint at http://dbpedia.org/sparql 
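For example, a minimal query against the endpoint can be built as follows (a sketch using only the standard library; the dbo: prefix abbreviates the mapping-based /ontology/ namespace):

```python
from urllib.parse import urlencode

# Public DBpedia SPARQL endpoint; dbo: abbreviates the /ontology/ namespace.
ENDPOINT = "http://dbpedia.org/sparql"

QUERY = """\
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?film ?label WHERE {
  ?film a dbo:Film ;
        rdfs:label ?label .
  FILTER (lang(?label) = "en")
} LIMIT 10
"""

def sparql_url(query: str, fmt: str = "application/sparql-results+json") -> str:
    """Build the GET URL; urllib.request.urlopen(url) would fetch the results."""
    return ENDPOINT + "?" + urlencode({"query": query, "format": fmt})

url = sparql_url(QUERY)
```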

    Lots of thanks to:  

    • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
    • Max Jakob (Freie Universität Berlin, Germany) for improving the DBpedia extraction framework and for extracting the new datasets.
    • Robert Isele and Anja Jentzsch (both Freie Universität Berlin, Germany) for helping Max with their expertise on the extraction framework.
    • Paul Kreis (Freie Universität Berlin, Germany) for analyzing the DBpedia data of the previous release and suggesting ways to increase quality and quantity. Some results of his work were implemented in this release.
    • Dimitris Kontokostas (Aristotle University of Thessaloniki, Greece), Jimmy O’Regan (Eolaistriu Technologies, Ireland), José Paulo Leal (University of Porto, Portugal) for providing patches to improve the extraction framework.
    • Claus Stadler (Universität Leipzig, Germany) for implementing the Geo-Related extractor and extracting its data.
    • Jens Lehmann and Sören Auer (both Universität Leipzig, Germany) for providing the new dataset via the DBpedia download server at Universität Leipzig.
    • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the dataset into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint. OpenLink Software (http://www.openlinksw.com/) altogether for providing the server infrastructure for DBpedia.  

    The work on the new release was financially supported by  

    • Neofonie GmbH, a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications (http://www.neofonie.de/).
    • The European Commission through the project LOD2 - Creating Knowledge out of Linked Data (http://lod2.eu/).
    • Vulcan Inc. as part of its Project Halo (http://www.projecthalo.com/). Vulcan Inc. creates and advances a variety of world-class endeavors and high impact initiatives that change and improve the way we live, learn, do business (http://www.vulcan.com/). More information about DBpedia is found at http://dbpedia.org/About  

    Have fun with the new dataset!  

    The whole DBpedia team also congratulates Wikipedia on its 10th birthday, which was this weekend!  

    Cheers,  

    Chris Bizer

    Links to DBpedia from Ontos NLP web services

    October 20, 2010 - 10:45 am by Sören

    The NLP specialist Ontos extends the quality and amount of information available to developers by integrating its news portal into the Linked Data Cloud. Ontos’ GUIDs for objects are now dereferenceable; the resulting RDF contains owl:sameAs links to DBpedia, Freebase and others (see, e.g., the entry for Barack Obama).

    Within the news portal, Ontos crawls news articles from diverse online sources, uses its cutting-edge NLP technology to extract facts (objects and the relations between them), merges this information with existing facts and stores it together with references to the original news articles, all fully automatically. Facts from Ontos’ portal are accessible via a RESTful HTTP API. Fetching data is free; to receive an API key, developers only have to register (e-mail address only!) at Ontos’ homepage.

    For humans, Ontos provides a search interface at http://www.ontosearch.com. It allows users to look up objects in the database and view their summaries in HTML or RDF.

    Please note that the generated RDF currently contains only a small part of the available information (e.g. no article references yet). Ontos will extend the content step by step.

    DBpedia 3.5.1 available on Amazon EC2

    August 10, 2010 - 8:49 am by ChrisBizer

    As Amazon Web Services is widely used for cloud computing, we have started to provide current snapshots of the DBpedia dataset for this environment.

    We provide the DBpedia dataset for Amazon Web Services in two ways:

    1. Source files to be mounted: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2319

    2. A Virtuoso SPARQL store to be instantiated: http://www.openlinksw.com/dataspace/dav/wiki/Main/VirtAWSDBpedia351C 

    DBpedia 3.5.1 released

    April 28, 2010 - 7:18 pm by AnjaJentzsch

    Hi all,

    we are happy to announce the release of DBpedia 3.5.1.

    This is primarily a bugfix release, based on Wikipedia dumps dating from March 2010. Thanks to the great community feedback on the previous DBpedia release, we were able to resolve the reported issues as well as improve the template-to-ontology mappings.

    The new release provides the following improvements and changes compared to the DBpedia 3.5 release:

    1. Some abstracts contained unwanted WikiText markup. The detection of infoboxes and tables has been improved, so that even most pages with syntax errors have clean abstracts now.
    2. In 3.5 there was an issue detecting interlanguage links, which led to some non-English statements having the wrong subject. This has been fixed.
    3. Image references to dummy images (e.g. http://en.wikipedia.org/wiki/Image:Replace_this_image.svg) have been removed.
    4. DBpedia 3.5.1 now uses stricter IRI validation. Care has been taken to discard only those URIs from Wikipedia that are clearly invalid.
    5. Recognition of disambiguation pages has been improved, increasing the size from 247,000 to 769,000 triples.
    6. More geographic coordinates are extracted now, increasing their number from 1,200,000 to 1,500,000 in the English version.
    7. For this release, all Freebase links have been regenerated from the most recent Freebase dump.

    You can download the new DBpedia dataset from http://wiki.dbpedia.org/Downloads351. As usual, the data set is also available as Linked Data and via the DBpedia SPARQL endpoint.

    Lots of thanks to:

    • Jens Lehmann and Sören Auer (both Universität Leipzig) for providing the knowledge base via the DBpedia download server at Universität Leipzig.
    • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the knowledge base into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.

    The whole DBpedia team is very thankful to three companies which enabled us to do all this by supporting and sponsoring the DBpedia project:

    • Neofonie GmbH (http://www.neofonie.de), a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications.
    • Vulcan Inc. as part of its Project Halo (http://www.projecthalo.com). Vulcan Inc. creates and advances a variety of world-class endeavors and high impact initiatives that change and improve the way we live, learn, do business (http://www.vulcan.com).
    • OpenLink Software (http://www.openlinksw.com). OpenLink Software develops the Virtuoso Universal Server, an innovative enterprise grade server that cost-effectively delivers an unrivaled platform for Data Access, Integration and Management.

    More information about DBpedia is found at http://dbpedia.org/About

    Have fun with the new DBpedia knowledge base!

    Cheers,

    Robert Isele and Anja Jentzsch

    DBpedia 3.5 released

    April 12, 2010 - 11:28 am by ChrisBizer

    Hi all,

    we are happy to announce the release of DBpedia 3.5. The new release is based on Wikipedia dumps dating from March 2010. Compared to the 3.4 release, we were able to increase the quality of the DBpedia knowledge base by employing a new data extraction framework which applies various data cleansing heuristics as well as by extending the infobox-to-ontology mappings that guide the data extraction process.

    The new DBpedia knowledge base describes more than 3.4 million things, out of which 1.47 million are classified in a consistent ontology, including 312,000 persons, 413,000 places, 94,000 music albums, 49,000 films, 15,000 video games, 140,000 organizations, 146,000 species and 4,600 diseases. The DBpedia data set features labels and abstracts for 3.2 million things in up to 92 different languages; 1,460,000 links to images and 5,543,000 links to external web pages; 4,887,000 external links into other RDF datasets, 565,000 Wikipedia categories, and 75,000 YAGO categories. The DBpedia knowledge base altogether consists of over 1 billion pieces of information (RDF triples) out of which 257 million were extracted from the English edition of Wikipedia and 766 million were extracted from other language editions.

    The new release provides the following improvements and changes compared to the DBpedia 3.4 release:

    1. The DBpedia extraction framework has been completely rewritten in Scala. The new framework dramatically reduces the extraction time of a single Wikipedia article from over 200 to about 13 milliseconds. All features of the previous PHP framework have been ported. In addition, the new framework can extract data from Wikipedia tables based on table-to-ontology mappings and is able to extract multiple infoboxes out of a single Wikipedia article. The data from each infobox is represented as a separate RDF resource. All resources that are extracted from a single page can be connected using custom RDF properties which are also defined in the mappings. A lot of work also went into the value parsers and the DBpedia 3.5 dataset should therefore be much cleaner than its predecessors. In addition, units of measurement are normalized to their respective SI unit, which makes querying DBpedia easier.
    2. The mapping language that is used to map Wikipedia infoboxes to the DBpedia Ontology has been redesigned. The documentation of the new mapping language is found at http://dbpedia.svn.sourceforge.net/viewvc/dbpedia/trunk/extraction/core/doc/mapping%20language/
    3. In order to enable the DBpedia user community to extend and refine the infobox to ontology mappings, the mappings can be edited on the newly created wiki hosted on http://mappings.dbpedia.org.  At the moment, 303 template mappings are defined, which cover (including redirects) 1055 templates. On the wiki, the DBpedia Ontology can be edited by the community as well. At the moment, the ontology consists of 259 classes and about 1,200 properties. 
    4. The ontology properties extracted from infoboxes are now split into two data sets (For details see: http://wiki.dbpedia.org/Datasets):  1. The Ontology Infobox Properties dataset contains the properties as they are defined in the ontology (e.g. length). The range of a property is either an xsd schema type or a dimension of measurement, in which case the value is normalized to the respective SI unit. 2. The Ontology Infobox Properties (Specific) dataset contains properties which have been specialized for a specific class using a specific unit. e.g. the property height is specialized on the class Person using the unit centimeters instead of meters.
    5. The framework now resolves template redirects, making it possible to cover all redirects to an infobox on Wikipedia with a single mapping. 
    6. Three new extractors have been implemented: 1. PageIdExtractor, which extracts the Wikipedia page ID of each page. 2. RevisionExtractor, which extracts the latest revision of each page. 3. PNDExtractor, which extracts PND (Personennamendatei) identifiers.
    7. The data set now provides labels, abstracts, page links and infobox data in 92 different languages, which have been extracted from recent Wikipedia dumps as of March 2010.
    8. In addition to the N-Triples datasets, N-Quads datasets are provided which include a provenance URI for each statement. The provenance URI denotes the origin of the extracted triple in Wikipedia (for details see: http://wiki.dbpedia.org/Datasets).

    You can download the new DBpedia dataset from http://wiki.dbpedia.org/Downloads35. As usual, the data set is also available as Linked Data and via the DBpedia SPARQL endpoint.
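For illustration, such an N-Quads line can be pulled apart with a few lines of Python (the example line and its provenance URI are invented for this sketch; see the Datasets page for the real format):

```python
# One invented N-Quads line: subject, predicate, object, then the
# provenance URI as the fourth term, terminated by " .".
line = ('<http://dbpedia.org/resource/Berlin> '
        '<http://www.w3.org/2000/01/rdf-schema#label> '
        '"Berlin"@en '
        '<http://en.wikipedia.org/wiki/Berlin> .')

def split_nquad(line):
    """Naive split for simple lines (no spaces inside the object literal)."""
    s, p, o, g = line.rstrip(" .").split(" ", 3)
    return s, p, o, g

subject, predicate, obj, provenance = split_nquad(line)
```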

    Lots of thanks to:

    • Robert Isele, Anja Jentzsch, Christopher Sahnwaldt, and Paul Kreis (all Freie Universität Berlin) for reimplementing the DBpedia extraction framework in Scala, for extending the infobox-to-ontology mappings and for extracting the new DBpedia 3.5 knowledge base. 
    • Jens Lehmann and Sören Auer (both Universität Leipzig) for providing the knowledge base via the DBpedia download server at Universität Leipzig.
    • Kingsley Idehen and Mitko Iliev (both OpenLink Software) for loading the knowledge base into the Virtuoso instance that serves the Linked Data view and SPARQL endpoint.

    The whole DBpedia team is very thankful to three companies which enabled us to do all this by supporting and sponsoring the DBpedia project:

    1. Neofonie GmbH (http://www.neofonie.de/index.jsp), a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications.
    2. Vulcan Inc. as part of its Project Halo (www.projecthalo.com). Vulcan Inc. creates and advances a variety of world-class endeavors and high impact initiatives that change and improve the way we live, learn, do business (http://www.vulcan.com/).
    3. OpenLink Software (http://www.openlinksw.com/). OpenLink Software develops the Virtuoso Universal Server, an innovative enterprise grade server that cost-effectively delivers an unrivaled platform for Data Access, Integration and Management.

    More information about DBpedia is found at http://dbpedia.org/About

    Have fun with the new DBpedia knowledge base!

    Cheers

    Chris Bizer

    OPEN POSITION: Move to Berlin, work on DBpedia (1 year full-time contract)

    March 29, 2010 - 1:54 pm by ChrisBizer

    Hi all, 

    the DBpedia Team at Freie Universität Berlin is looking for a developer/researcher who wants to contribute to the further development of the DBpedia information extraction framework, investigate approaches to annotate free-text with DBpedia URIs and participate in the various Linked Data efforts currently advanced by our team. 

    Candidates should have

    • good programming skills in Java; Scala and PHP are also helpful.
    • a university degree, preferably in computer science or information systems. Previous knowledge of Semantic Web technologies (RDF, SPARQL, Linked Data) and experience with information extraction and/or named-entity recognition techniques are a plus. 

    Contract start date: 15 May 2010
    Duration: 1 year
    Salary: around 40.000 Euro/year (German BAT IIa) 

    You will be part of an innovative and cordial team and enjoy flexible work hours. After the year, chances are high that you will be able to choose between longer-term positions at Freie Universität Berlin and at Neofonie. Please contact chris@bizer.de by 15 April 2010 for additional details and include information about your skills and experience in your mail. 

    The whole DBpedia team is very thankful to neofonie GmbH for contributing to the development of the DBpedia project by financing this position. neofonie is a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications. 

    Cheers, 

    Chris  

     
    Prof. Dr. Christian Bizer
    Web-based Systems Group
    Freie Universität Berlin
    +49 30 838 55509
    http://www.bizer.de
    chris@bizer.de

    Invitation to contribute to DBpedia by improving the infobox mappings + New Scala-based Extraction Framework

    March 12, 2010 - 1:07 pm by ChrisBizer

    Hi all,

    in order to extract high-quality data from Wikipedia, the DBpedia extraction framework relies on infobox-to-ontology mappings, which define how Wikipedia infobox templates are mapped to classes of the DBpedia ontology.

    Up to now, these mappings were defined only by the DBpedia team. As Wikipedia is huge and contains many different infobox templates, we were only able to define mappings for a small subset of all Wikipedia infoboxes, and even for those we only managed to map a subset of their properties.

    In order to enable the DBpedia user community to contribute to improving the coverage and the quality of the mappings, we have set up a public wiki at http://mappings.dbpedia.org/index.php/Main_Page which contains:

    1.  all mappings that are currently used by the DBpedia extraction framework
    2. the definition of the DBpedia ontology and
    3. documentation for the DBpedia mapping language as well as step-by-step guides on how to extend and refine mappings and the ontology.

    So if you are using DBpedia data and were ever annoyed that DBpedia did not properly cover the infobox template that is most important to you, you are highly invited to extend the mappings and the ontology in the wiki. Your edits will be used for the next DBpedia release, expected in the first week of April.

    The process of contributing to the ontology and the mappings is as follows:

    1.  You familiarize yourself with the DBpedia mapping language by reading the documentation in the wiki.
    2. In order to prevent spam, the wiki is read-only and new editors need to be confirmed by a member of the DBpedia team (currently Anja Jentzsch does the clearing). Therefore, please create an account in the wiki. After this, Anja will give you editing rights and you can edit the mappings as well as the ontology.
    3. To contribute to the next DBpedia release, you can edit until Sunday, March 21. After this, we will check the mappings and the ontology definition in the wiki for consistency and then use both for the next DBpedia release.

    So, we are starting a kind of social experiment to see whether the DBpedia user community is willing to contribute to the improvement of DBpedia, and how the DBpedia ontology develops through community contributions. :-)

    Please excuse that it is currently still rather cumbersome to edit the mappings and the ontology. We are working on a visual editor for the mappings as well as a validation service, which will check edits to the mappings and test new mappings against example pages from Wikipedia. We hope to deploy these tools in the next two months, but wanted to release the wiki as early as possible in order to allow community contributions to the DBpedia 3.5 release.

    If you have questions about the wiki and the mapping language, please ask them on the DBpedia mailing list where Anja and Robert will answer them.

    What else is happening around DBpedia?

    In order to speed up the data extraction process and to lay a solid foundation for the DBpedia Live extraction, we have ported the DBpedia extraction framework from PHP to Scala/Java. The new framework extracts exactly the same types of data from Wikipedia as the old framework, but now processes a single page in 13 milliseconds instead of 200. In addition, the new framework can extract data from tables within articles and can handle multiple infobox templates per article. The new framework is available under the GPL in the DBpedia SVN and is documented at http://wiki.dbpedia.org/Documentation.

    The whole DBpedia team is very thankful to two companies which enabled us to do all this by sponsoring the DBpedia project:

    1. Vulcan Inc. as part of its Project Halo (www.projecthalo.com). Vulcan Inc. creates and advances a variety of world-class endeavors and high impact initiatives that change and improve the way we live, learn, do business (http://www.vulcan.com/).
    2.  Neofonie GmbH, a Berlin-based company offering leading technologies in the area of Web search, social media and mobile applications (http://www.neofonie.de/index.jsp).

    Thank you a lot for your support!

    I personally would also like to thank:

    1.  Anja Jentzsch, Robert Isele, and Christopher Sahnwaldt for all their great work on implementing the new extraction framework and for setting up the mapping wiki.
    2.  Andreas Lange and Sidney Bofah for correcting and extending the mappings in the Wiki.

    Cheers,

    Chris Bizer