Today we are featuring DBpedia-Entity in our blog series introducing interesting DBpedia applications and tools to the DBpedia community and beyond. Read on and enjoy.
DBpedia-Entity is a standard test collection for entity search over the DBpedia knowledge base. It is meant for evaluating retrieval systems that return a ranked list of entities (DBpedia URIs) in response to a free text user query.
The first version of the collection (DBpedia-Entity v1) was released in 2013, based on DBpedia v3.7. It was created by assembling search queries from a number of entity-oriented benchmarking campaigns and mapping the relevant results to DBpedia. An updated version, DBpedia-Entity v2, was released in 2017 as the result of a collaborative effort between the IAI group of the University of Stavanger, the Norwegian University of Science and Technology, Wayne State University, and Carnegie Mellon University. It was published at the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’17), where it received a Best Short Paper Honorable Mention Award. See the paper and poster.
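Retrieval systems evaluated on such a collection are scored on the ranked entity lists they return, typically with rank-based measures such as NDCG. As an illustration only (this code is not part of the collection itself, and real evaluations normalize against the full set of judged entities), a minimal NDCG computation over graded relevance labels might look like this:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: each graded relevance score is
    # discounted by the log of its (1-based) rank position.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_rels, k=None):
    # Simplified NDCG sketch: the system's DCG divided by the DCG of
    # the ideal (descending) ordering of the same relevance labels.
    if k is not None:
        ranked_rels = ranked_rels[:k]
    ideal_dcg = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal_dcg if ideal_dcg > 0 else 0.0
```

A perfect ranking (most relevant entities first) scores 1.0; placing relevant entities lower in the list reduces the score.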
DBpedia Entity was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.
A small demo app for a generic natural language interaction library I am developing: NLI-GO. It allows you to ask a few questions in natural language (English). These questions are answered by DBpedia via SPARQL queries.
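The public DBpedia SPARQL endpoint at https://dbpedia.org/sparql can be queried over plain HTTP. As a hedged sketch (the example query is illustrative; the demo's own queries are generated by NLI-GO from the parsed English question), building such a request in Python might look like this:

```python
import urllib.parse

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"

def build_query_url(sparql):
    # Encode a SPARQL SELECT query as a GET request against the public
    # DBpedia endpoint, asking for JSON-formatted results.
    params = urllib.parse.urlencode(
        {"query": sparql, "format": "application/sparql-results+json"}
    )
    return DBPEDIA_ENDPOINT + "?" + params

# "Who wrote Moby-Dick?" expressed as SPARQL:
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?author WHERE {
  <http://dbpedia.org/resource/Moby-Dick> dbo:author ?author .
}
"""
url = build_query_url(query)
# The URL can then be fetched, e.g. with urllib.request.urlopen(url),
# and the answer read from the JSON result bindings.
```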
NLI-GO DBPedia demo was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.
We are pleased to announce the official release of DBpedia Live. The main objective of DBpedia is to extract structured information from Wikipedia, convert it into RDF, and make it freely available on the Web. In a nutshell, DBpedia is the Semantic Web mirror of Wikipedia.
Wikipedia users constantly revise Wikipedia articles, with updates happening almost every second. Hence, data stored in the official DBpedia endpoint can quickly become outdated, and Wikipedia articles need to be re-extracted. DBpedia Live enables such continuous synchronization between DBpedia and Wikipedia.
The DBpedia Live framework has the following new features:
- Migration from the previous PHP framework to the new Java/Scala DBpedia framework.
- Support for clean abstract extraction.
- Automatic reprocessing of all pages affected by a schema mapping change at http://mappings.dbpedia.org.
- Automatic reprocessing of pages that have not changed for more than one month. The main objective of this feature is to ensure that any change in the DBpedia framework, e.g. the addition or modification of an extractor, will eventually affect all extracted resources. It also serves as a fallback for technical problems in Wikipedia or the update stream.
- Publication of all changesets.
- Provision of a tool that enables other DBpedia mirrors to stay synchronized with our DBpedia Live endpoint. The tool continuously downloads changesets and applies the corresponding changes to a specified triple store.
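The mirror's update step can be pictured as follows. This is a hedged sketch, not the actual synchronization tool: it assumes a changeset arrives as two N-Triples strings (triples to remove and triples to add) and turns them into a single SPARQL 1.1 Update request; the function name and graph URI are illustrative.

```python
def changeset_to_update(removed_nt, added_nt, graph="http://dbpedia.org/live"):
    # Build one SPARQL 1.1 Update that first deletes the removed triples,
    # then inserts the added ones, so each changeset is applied as a unit.
    parts = []
    if removed_nt.strip():
        parts.append("DELETE DATA { GRAPH <%s> {\n%s\n} }" % (graph, removed_nt))
    if added_nt.strip():
        parts.append("INSERT DATA { GRAPH <%s> {\n%s\n} }" % (graph, added_nt))
    return " ;\n".join(parts)

removed = '<http://dbpedia.org/resource/X> <http://dbpedia.org/ontology/abstract> "old" .'
added = '<http://dbpedia.org/resource/X> <http://dbpedia.org/ontology/abstract> "new" .'
update = changeset_to_update(removed, added)
# `update` would then be POSTed to the mirror's SPARQL Update endpoint.
```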
Thanks a lot to Mohamed Morsey, who implemented this version of DBpedia Live as well as to Sebastian Hellmann and Claus Stadler who worked on its predecessor. We also thank our partners at the FU Berlin and OpenLink as well as the LOD2 project for their support.
European public bodies produce thousands upon thousands of datasets every year – about everything from how our tax money is spent to the quality of the air we breathe.
The Opendata competition aims to challenge designers, developers, journalists, researchers and the general public to come up with something useful, valuable or interesting using open public data.
There are four main strands to the competition:
Ideas – Anyone can suggest an idea for projects which reuse public information to do something interesting or useful.
Apps – Teams of developers can submit working applications which reuse public information.
Visualisations – Designers, artists and others can submit interesting or insightful visual representations of public information.
Datasets – We encourage the submission of any form of open datasets produced by public governmental bodies, either submitted directly by the public body or by developers or others who have transformed, cleaned or interlinked the data.
The competition is open until midnight on 5th June. The winners will be selected by an all-star cast of open data gurus and announced in mid-June at the European Digital Assembly in Brussels. More information can be found at: http://opendatachallenge.org/
The NLP specialist Ontos extends the quality and amount of information for developers by integrating its news portal into the Linked Data Cloud. Ontos’ GUIDs for objects are now dereferencable – the resulting RDF contains owl:sameAs-attributes to DBpedia, Freebase and others (cf. e.g the entry for Barack Obama).
Within the news portal, Ontos crawls news articles from diverse online sources, uses its cutting-edge NLP technology to extract facts (objects and relations between them), merges this information with existing facts, and stores it together with references to the original news articles – all fully automatically. Facts from Ontos’ portal are accessible via a RESTful HTTP API. Fetching data is free – to receive an API key, developers only have to register (e-mail address only!) at Ontos’ homepage.
For humans, Ontos provides a search interface at http://www.ontosearch.com. It allows users to look up objects in the database and view the respective summaries in HTML or RDF.
Please note that the generated RDF currently contains only a small part of the existing information (e.g. no article references yet). Ontos will extend the respective content step by step.
OKCon, now in its fifth year, is the interdisciplinary conference that brings together individuals from across the open knowledge spectrum (including DBpedia in particular and Linked Open Data in general) for a day of presentations and workshops. Open knowledge promises significant social and economic benefits in a wide range of areas, from governance to science and from culture to technology. Opening up access to content and data can radically increase access and reuse, improving transparency, fostering innovation and increasing societal welfare.
In addition to high profile initiatives such as Wikipedia, OpenStreetMap and the Human Genome Project, there is enormous growth among open knowledge projects and communities at all levels. Moreover, in the last year, many governments across the world have begun opening up their data.
And it doesn’t stop there. In academia, open access to both publications and data has been gathering momentum, and similar calls to open up learning materials have been heard in education. Furthermore, this gathering flood of open data and content is the creator and driver of massive technological change. How can we make this data available, how can we connect it together, and how can we use it to collaborate and share our work?
- where: London, UK
- when: Saturday 24th April, 2010
- www: http://www.okfn.org/okcon/
- cfp: http://www.okfn.org/okcon/cfp/ (deadline: Jan 31st 2010)
- hashtag: #okcon2010
The new year is slowly approaching and people start compiling their top x lists of 2009, with x usually ranging between 10 and 365. 😉
The popular Web technology blog ReadWriteWeb has chosen x = 10 and picked DBpedia as one of its top Semantic Web products of 2009. It’s actually the only non-commercial community project in the list, in good company with products such as Google’s Search Options and Rich Snippets, Apperture and Data.gov. Other picks, which by the way heavily use or link to DBpedia, include OpenCalais, Freebase, BBC Music and Zemanta.
Read the full article at http://www.readwriteweb.com/archives/top_10_semantic_web_products_of_2009.php
Kingsley announced on Tuesday that the first data sets from the LOD community, including DBpedia, have been uploaded to Amazon’s public data set hosting facility. Thus you can now do the following:
- Download DBpedia data from Amazon’s hosting facility at no cost to your own data center and then build your own personal or service specific edition of DBpedia
- Download to an EC2 AMI and build yourself using Virtuoso or any other Quad / Triple Store
- Use the DBpedia EC2 AMI which we provide (which will produce a rendition in 1.5 hrs)
We especially thank our colleagues and new Linked Data supporters at both Amazon Web Services and Infochimps.org for their assistance in getting this very taxing process in motion.
Together with this year’s I-Semantics conference, we are organizing a Linking Open Data Triplification Challenge.
The challenge aims at expediting the process of revealing and exposing structured representations, as the DBpedia project does for Wikipedia. Structured (relational) representations already back most existing Web sites. In addition to revealing these, the challenge also aims at raising awareness in the Web developer community and showcasing best practices.
The challenge awards attractive prizes (MacBook Air, EeePC, iPod) to the most innovative and promising semantifications. The prizes are kindly sponsored by OpenLink Software, Punkt.NetServices and InfAI.
More information about the challenge can be found at:
Outreach to the Web developer communities (as intended with the challenge) is really crucial right now to expedite the Semantic Web deployment and we would be very excited if you support this effort – e.g. by spreading the word and/or submitting to the challenge.
DBpedia exposes semantics extracted from one of the largest information sources on the Web. But one of the nice things about the Web is the variety and wealth of its content (including your blog, wiki, CMS or other web app). To make this large variety of small websites more mashable and bring them onto the Semantic Web, the makers of DBpedia have released technologies that dramatically simplify the “semantification” of your websites. Please check out Triplify (a generic plugin for web apps, with preconfigurations for Drupal, WordPress and WackoWiki), D2RQ (Java software for mapping relational DB content and serving it on the Semantic Web) and Virtuoso (a comprehensive DB and knowledge store infrastructure).