Before getting into the technical details, did you know that the term Chaudron derives from Old French and denotes a large metal cooking pot? The word was also used as an alternative form of chawdron, which means entrails. Entrails and cauldron – a combination that seems quite fitting with Halloween coming up.
And now for something completely different
To begin with, Chaudron is a dataset of more than two million triples. It complements DBpedia with physical measures. The triples are automatically extracted from Wikipedia infoboxes using pattern-matching and formal-grammar approaches. This dataset adds triples to the existing DBpedia resources. Additionally, it includes measures on various kinds of resources such as chemical elements, railways, people, places, aircraft, dams and many other types of resources.
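To make this concrete, here is a minimal rdflib sketch of what such an additional measure triple could look like; the chaudron: namespace and the height predicate are invented placeholders, not the dataset's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

# Illustration only: the chaudron: namespace and the "height" predicate are
# placeholders, not the dataset's actual vocabulary.
DBR = Namespace("http://dbpedia.org/resource/")
CHAUDRON = Namespace("http://example.org/chaudron/")

g = Graph()
# A physical measure extracted from an infobox, attached as an additional
# triple to an existing DBpedia resource.
g.add((DBR["Eiffel_Tower"], CHAUDRON["height"], Literal("324", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```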
Chaudron was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.
Want to find out more about our DBpedia Applications? Why not read about the DBpedia Chatbot, DBpedia Entity or the NLI-Go DBpedia Demo?
Happy reading & happy Halloween!
Yours DBpedia Association
PS: In case you want your DBpedia tool, demo or any kind of application published on our Website and the DBpedia Blog, fill out this form and submit your information.
This year’s GSoC is slowly coming to an end, with final evaluations already being submitted. To bridge the waiting time until the final results are published, we would like to draw your attention to a former project and great tool that was developed during last year’s GSoC.
Meet the DBpedia Chatbot.
The DBpedia Chatbot is a conversational chatbot for DBpedia which is accessible through the following platforms:
- A Web Interface
- Facebook Messenger
The bot can respond to users with simple short text messages or with more elaborate interactive messages. Users can communicate with the bot through text as well as through interactions such as clicking on buttons or links. The bot serves four main purposes:
- Answering factual questions
- Answering questions related to DBpedia
- Exposing the research work being done in DBpedia as product features
- Casual conversation/banter
The bot tries to answer text-based questions of the following types:
Natural Language Questions
- Give me the capital of Germany
- Who is Obama?
- Where is the Eiffel Tower?
- Where is France’s capital?
Users can ask the bot to check if vital DBpedia services are operational.
- Is DBpedia down?
- Is lookup online?
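Under the hood, such a status check can be as simple as an HTTP request against the public endpoints. The sketch below is not the chatbot's actual code; the endpoint URLs are the commonly used public ones and may change.

```python
import requests

# A minimal status-check sketch, not the chatbot's actual implementation.
SERVICES = {
    "DBpedia SPARQL endpoint": "https://dbpedia.org/sparql",
    "DBpedia Lookup": "https://lookup.dbpedia.org/",
}

def check_services():
    for name, url in SERVICES.items():
        try:
            code = requests.get(url, timeout=10).status_code
            print(f"{name}: {'online' if code == 200 else 'HTTP ' + str(code)}")
        except requests.RequestException as exc:
            print(f"{name}: unreachable ({exc})")

if __name__ == "__main__":
    check_services()
```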
Users can ask basic information about specific DBpedia local chapters.
- DBpedia Arabic
- German DBpedia
These are predominantly questions about DBpedia itself, for which the bot provides predefined, templatized answers. Some examples include:
- What is DBpedia?
- How can I contribute?
- Where can I find the mapping tool?
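Conceptually, such templatized answers can be served from a simple lookup table. The sketch below is a toy illustration with invented strings and naive matching; it is not the bot's actual implementation.

```python
# Toy illustration of template-based answering, not the chatbot's actual code:
# known DBpedia questions map to predefined answers.
TEMPLATES = {
    "what is dbpedia": "DBpedia extracts structured data from Wikipedia and "
                       "publishes it as Linked Data.",
    "how can i contribute": "See the DBpedia website for mappings, code and "
                            "community channels you can join.",
}

def answer(question: str) -> str:
    key = question.lower().strip(" ?")
    return TEMPLATES.get(key, "Sorry, I don't have an answer for that yet.")

print(answer("What is DBpedia?"))
```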
Finally, casual messages (banter) fall under the last category. For example:
- What is your name?
If you would like to take a closer look at the internal processes and how the chatbot was developed, check out the DBpedia GitHub pages.
DBpedia Chatbot was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.
In case you want your DBpedia-based tool or demo to be published on our website, just follow the link and submit your information – we will do the rest.
Today we are featuring DBpedia Entity in our blog series introducing interesting DBpedia applications and tools to the DBpedia community and beyond. Read on and enjoy.
DBpedia-Entity is a standard test collection for entity search over the DBpedia knowledge base. It is meant for evaluating retrieval systems that return a ranked list of entities (DBpedia URIs) in response to a free text user query.
The first version of the collection (DBpedia-Entity v1) was released in 2013, based on DBpedia v3.7. It was created by assembling search queries from a number of entity-oriented benchmarking campaigns and mapping relevant results to DBpedia. An updated version of the collection, DBpedia-Entity v2, was released in 2017 as a result of a collaborative effort between the IAI group of the University of Stavanger, the Norwegian University of Science and Technology, Wayne State University, and Carnegie Mellon University. It was published at the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’17), where it received a Best Short Paper Honorable Mention Award. See the paper and poster.
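To illustrate how such a collection is typically used, here is a minimal, self-contained sketch of scoring a single ranked run with NDCG@k against graded relevance judgments; the URIs and judgment values are invented for illustration and are not taken from the collection.

```python
import math

def ndcg_at_k(ranked_entities, qrels, k=10):
    """NDCG@k for one query: ranked_entities is a list of DBpedia URIs,
    qrels maps URI -> graded relevance (e.g. 0, 1, 2)."""
    gains = [qrels.get(e, 0) for e in ranked_entities[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Illustrative only -- URIs and judgments are made up, not from the collection.
run = ["http://dbpedia.org/resource/Eiffel_Tower",
       "http://dbpedia.org/resource/Paris"]
judgments = {"http://dbpedia.org/resource/Eiffel_Tower": 2,
             "http://dbpedia.org/resource/Paris": 1}
print(round(ndcg_at_k(run, judgments, k=10), 3))
```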
DBpedia Entity was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.
A small demo app for a generic natural language interaction library I am developing: NLI-GO. It allows you to ask a few questions in natural language (English). These questions are answered by DBpedia via SPARQL queries.
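As a rough illustration (not NLI-GO's actual pipeline), a question such as "Where is France's capital?" can be translated into a SPARQL query against the public DBpedia endpoint, for example with SPARQLWrapper:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Minimal sketch: the kind of SPARQL lookup a question like
# "Where is France's capital?" could be translated into.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?capital WHERE { dbr:France dbo:capital ?capital . }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["capital"]["value"])   # -> http://dbpedia.org/resource/Paris
```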
The NLI-GO DBpedia demo was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.
We are pleased to announce the official release of DBpedia Live. The main objective of DBpedia is to extract structured information from Wikipedia, convert it into RDF, and make it freely available on the Web. In a nutshell, DBpedia is the Semantic Web mirror of Wikipedia.
Wikipedia users constantly revise Wikipedia articles with updates happening almost each second. Hence, data stored in the official DBpedia endpoint can quickly become outdated, and Wikipedia articles need to be re-extracted. DBpedia Live enables such a continuous synchronization between DBpedia and Wikipedia.
The DBpedia Live framework has the following new features:
- Migration from the previous PHP framework to the new Java/Scala DBpedia framework.
- Support of clean abstract extraction.
- Automatic reprocessing of all pages affected by a schema mapping change at http://mappings.dbpedia.org.
- Automatic reprocessing of pages that have not changed for more than one month. The main objective of this feature is to ensure that any change in the DBpedia framework, e.g. the addition or change of an extractor, will eventually affect all extracted resources. It also serves as a fallback for technical problems in Wikipedia or the update stream.
- Publication of all changesets.
- Provision of a tool to enable other DBpedia mirrors to be in synchronization with our DBpedia Live endpoint. The tool continuously downloads changesets and performs changes in a specified triple store accordingly.
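As a rough illustration of the changeset idea, the sketch below applies the removed and added triples of one changeset to a local store via SPARQL Update. The file names, endpoint URL and graph URI are assumptions; the official synchronization tool handles downloading, ordering and error handling for you.

```python
from SPARQLWrapper import SPARQLWrapper, POST

# Assumptions: a local update-capable endpoint (e.g. Virtuoso) and a target
# graph URI. Changeset contents are plain N-Triples strings.
UPDATE_ENDPOINT = "http://localhost:8890/sparql"
GRAPH = "http://dbpedia.org/live"

def apply_changeset(removed_nt: str, added_nt: str) -> None:
    sparql = SPARQLWrapper(UPDATE_ENDPOINT)
    sparql.setMethod(POST)
    # First delete the retracted triples, then insert the new ones.
    for op, triples in (("DELETE DATA", removed_nt), ("INSERT DATA", added_nt)):
        if triples.strip():
            sparql.setQuery(f"{op} {{ GRAPH <{GRAPH}> {{ {triples} }} }}")
            sparql.query()

# apply_changeset(open("000001.removed.nt").read(), open("000001.added.nt").read())
```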
Thanks a lot to Mohamed Morsey, who implemented this version of DBpedia Live as well as to Sebastian Hellmann and Claus Stadler who worked on its predecessor. We also thank our partners at the FU Berlin and OpenLink as well as the LOD2 project for their support.
European public bodies produce thousands upon thousands of datasets every year – about everything from how our tax money is spent to the quality of the air we breathe.
The Open Data Challenge invites designers, developers, journalists, researchers and the general public to come up with something useful, valuable or interesting using open public data.
There are four main strands to the competition:
Ideas – Anyone can suggest an idea for projects which reuse public information to do something interesting or useful.
Apps – Teams of developers can submit working applications which reuse public information.
Visualisations – Designers, artists and others can submit interesting or insightful visual representations of public information.
Datasets – We encourage the submission of any form of open datasets produced by public governmental bodies, either submitted directly by the public body or by developers or others who have transformed, cleaned or interlinked the data.
The competition is open until midnight on 5th June. The winners will be selected by an all-star cast of open data gurus and announced in mid-June at the European Digital Assembly in Brussels. More information can be found at: http://opendatachallenge.org/
The NLP specialist Ontos extends the quality and amount of information available to developers by integrating its news portal into the Linked Data Cloud. Ontos’ GUIDs for objects are now dereferenceable – the resulting RDF contains owl:sameAs links to DBpedia, Freebase and others (cf. e.g. the entry for Barack Obama).
Within the news portal, Ontos crawls news articles from diverse online sources, uses its cutting-edge NLP technology to extract facts (objects and relations between them), merges this information with existing facts and stores them together with references to the original news articles – all of this fully automatically. Facts from Ontos’ portal are accessible via a RESTful HTTP API. Fetching data is free – in order to receive an API key, developers only have to register (e-mail address only!) at Ontos’ homepage.
For humans, Ontos provides a search interface at http://www.ontosearch.com. It allows users to look up objects in the database and view the respective summaries in HTML or RDF.
Please note that the generated RDF currently contains only a small part of the existing information (e.g. no article references yet). Ontos will extend the respective content step by step.
OKCon, now in its fifth year, is the interdisciplinary conference that brings together individuals from across the open knowledge spectrum (including DBpedia in particular and Linked Open Data in general) for a day of presentations and workshops. Open knowledge promises significant social and economic benefits in a wide range of areas, from governance to science and from culture to technology. Opening up access to content and data can radically increase access and reuse, improving transparency, fostering innovation and increasing societal welfare.
In addition to high profile initiatives such as Wikipedia, OpenStreetMap and the Human Genome Project, there is enormous growth among open knowledge projects and communities at all levels. Moreover, in the last year, many governments across the world have begun opening up their data.
And it doesn’t stop there. In academia, open access to both publications and data has been gathering momentum, and similar calls to open up learning materials have been heard in education. Furthermore, this gathering flood of open data and content is the creator and driver of massive technological change. How can we make this data available, how can we connect it together, and how can we use it to collaborate and share our work?
- where: London, UK
- when: Saturday 24th April, 2010
- www: http://www.okfn.org/okcon/
- cfp: http://www.okfn.org/okcon/cfp/ (deadline: Jan 31st 2010)
- hashtag: #okcon2010
The new year is slowly approaching and people are starting to compile their top x lists of 2009, with x usually ranging between 10 and 365. 😉
The popular Web technology blog ReadWriteWeb has chosen x = 10 and picked DBpedia as one of their top Semantic Web products of 2009. It is actually the only non-commercial community project in the list and in good company with products such as Google’s Search Options and Rich Snippets, Apperture and Data.gov. Other picks, which by the way heavily use or link to DBpedia, include OpenCalais, Freebase, BBC Music and Zemanta.
Read the full article at http://www.readwriteweb.com/archives/top_10_semantic_web_products_of_2009.php
Kingsley announced on Tuesday that the first data sets from the LOD community, including DBpedia, have been uploaded to Amazon’s public data set hosting facility. Thus you can now do the following:
- Download DBpedia data from Amazon’s hosting facility at no cost to your own data center and then build your own personal or service-specific edition of DBpedia
- Download to an EC2 AMI and build it yourself using Virtuoso or any other quad/triple store
- Use the DBpedia EC2 AMI which we provide (which will produce a rendition in 1.5 hrs)
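As a rough, hedged illustration of "building your own edition", the sketch below loads a small, locally downloaded DBpedia N-Triples file into rdflib and queries it; the file name is a placeholder, and full dumps belong in a proper triple store such as Virtuoso rather than an in-memory graph.

```python
from rdflib import Graph

# Placeholder file name: a small slice of a downloaded DBpedia N-Triples dump.
g = Graph()
g.parse("dbpedia_sample.nt", format="nt")

# Query the local copy, e.g. for capital relations.
q = """
    SELECT ?s ?o WHERE { ?s <http://dbpedia.org/ontology/capital> ?o } LIMIT 5
"""
for s, o in g.query(q):
    print(s, o)
```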
We especially thank our colleagues and new Linked Data supporters at both Amazon Web Services and Infochimps.org for their assistance in getting this very taxing process in motion.