
One Billion derived Knowledge Graphs

… by and for Consumers until 2025

One Billion – what a mission! We are proud to announce that the DBpedia Databus website at https://databus.dbpedia.org and the SPARQL API at https://databus.dbpedia.org/(repo/sparql|yasgui) (see the documentation) are now in public beta!

The system is usable (eat-your-own-dog-food tested), following a “working software over comprehensive documentation” approach. Due to its many components (website, SPARQL endpoints, Keycloak, mods, upload client, download client, and data debugging), we estimate approximately six months in beta to fix bugs, implement all features and improve the details.
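As a quick way to exercise the public beta, the endpoint can be queried over plain HTTP. Below is a minimal sketch in Scala, assuming the endpoint at https://databus.dbpedia.org/repo/sparql follows the standard SPARQL 1.1 Protocol; the query shown is a generic placeholder rather than a Databus-specific one.

```scala
import java.net.{HttpURLConnection, URL, URLEncoder}
import scala.io.Source

object DatabusSparqlSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder query; replace it with one tailored to the Databus vocabulary.
    val query    = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
    val endpoint = "https://databus.dbpedia.org/repo/sparql"
    val url      = new URL(endpoint + "?query=" + URLEncoder.encode(query, "UTF-8"))

    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    // Request JSON results as defined by the SPARQL 1.1 Protocol.
    conn.setRequestProperty("Accept", "application/sparql-results+json")

    val body = Source.fromInputStream(conn.getInputStream, "UTF-8").mkString
    conn.disconnect()
    println(body)
  }
}
```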

But, let’s start from the beginning

The DBpedia Databus is a platform to capture the effort invested by data consumers who need better data quality (fitness for use) in order to use the data, and to channel improvements back to the data source and to other consumers. The DBpedia Databus enables anybody to build an automated DBpedia-style extraction, mapping and testing workflow for any data they need. The Databus incorporates features from DNS, Git, RSS, online forums and Maven to harness the full working power of data consumers.

Our vision

Professional consumers of data worldwide have already built stable cleaning and refinement chains for all available datasets, but their efforts are invisible and not reusable. Deep silos of cleaned data exist, trapped locally in pipelines, beyond the reach of publishers and other consumers. Data is not oil that flows out of inflexible pipelines. The Databus breaks existing pipelines into individual components that together form a decentralized, but centrally coordinated data network. In this set-up, data can flow back to previous components, to the original sources, or end up being consumed by external components.

One Billion interconnected, quality-controlled Knowledge Graphs until 2025

The Databus provides a platform for re-publishing these files with very little effort (leaving file traffic as the only cost factor) while offering the full benefits of built-in system features such as automated publication, structured querying, automatic ingestion, as well as pluggable automated analysis, data testing via continuous integration, and automated application deployment (software with data). The impact is highly synergistic: just a few thousand professional consumers and research projects can expose millions of cleaned datasets, on par with what has long existed in deep silos and pipelines.

To a data consumer network

As we are inverting the paradigm from a publisher-centric view to a data consumer network, we will open the download valve to enable discovery of and access to massive amounts of data that is cleaner than what the original sources publish. The main DBpedia Knowledge Graph alone has 600k file downloads per year, complemented by downloads at over 20 chapters, e.g. http://es.dbpedia.org, as well as over 8 million daily hits on the main Virtuoso endpoint.

Community extensions from the alpha phase, such as DBkWik and LinkedHypernyms, are being loaded onto the bus and consolidated. We expect this number to reach over 100 by the end of the year. Companies and organisations who have previously uploaded their backlinks here will be able to migrate to the Databus. Other datasets are being cleaned and posted. In two of our research projects, LOD-GEOSS and PLASS, we will re-publish open datasets, clean them and create collections, which will result in DBpedia-style knowledge graphs for energy systems and supply-chain management.

A new era for decentralized collaboration on data quality

DBpedia was established around producing a queryable knowledge graph derived from Wikipedia content that is able to answer questions like “What do Innsbruck and Leipzig have in common?” A community and consumer network quickly formed around this highly useful data, resulting in a large, well-structured, open knowledge graph that seeded the Linked Open Data Cloud, which is the largest knowledge graph on earth. The main lesson learned after these 13 years is that current data “copy” or “download” processes are inefficient by a magnitude that can only be grasped from a global perspective. Consumers spend tremendous effort fixing errors on the client side. If one unparseable line needs 15 minutes to find and fix, we are talking about 104 days of work for 10,000 downloads. Providers, on the other hand, will never have the resources to fix the last error, as cost increases exponentially (the 80/20 rule).
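For reference, the 104-day figure follows directly from those two numbers:

$$10{,}000 \text{ downloads} \times 15 \text{ min} = 150{,}000 \text{ min} = 2{,}500 \text{ h} \approx 104 \text{ days (around the clock)}.$$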

One billion knowledge graphs in mind – the progress so far

Discarding faulty data often means that a substitute source has to be found, which takes hours of research and might lead to similar problems. From the dozens of DBpedia Community meetings we have held, we can summarize that for each clean-up procedure, data transformation, linkset or schema mapping that a consumer creates client-side, dozens of consumers have invested the same effort client-side before them, and none of it reaches the source or other consumers with the same problem. Holding the community meetings showed us just the tip of the iceberg.

As a foundation, we implemented a mappings wiki that allowed consumers to improve data quality centrally. The next advancement was the creation of the SHACL standard by our former CTO and board member Dimitris Kontokostas. SHACL allows consumers to specify repeatable tests on graph structures and datatypes, which is an effective way to systematically assess data quality. We established the DBpedia Databus as a central platform to better capture decentrally created, client-side value by consumers.

It is an open system; therefore, the value that is captured flows right back to everybody.

The full document, “DBpedia’s Databus and strategic initiative to facilitate ‘One Billion derived Knowledge Graphs by and for Consumers’ until 2025”, is available here.

If you have any feedback or questions, please use the DBpedia Forum, the “report issues” button, or dbpedia@infai.org.

Yours,

DBpedia Association

More than 50 DBpedia enthusiasts joined the Community Meeting in Karlsruhe.

SEMANTiCS is THE leading European conference in the field of semantic technologies and the platform for professionals who make semantic computing work, understand its benefits and know its limitations.

Since we at DBpedia have a long-standing partnership with SEMANTiCS, we also joined this year’s event in Karlsruhe. September 12, the last day of the conference, was dedicated to the DBpedia community.

First and foremost, we would like to thank the Institute for Applied Informatics for supporting our community and many thanks to FIZ Karlsruhe for hosting our community meeting.

In the following, we give you a brief retrospective of the presentations.

Opening Session

Katja Hose – “Querying the web of data”

… on the search for the killer app.

The concept of Linked Open Data and the promise of the Web of Data have been around for over a decade now. Yet, the great potential of free access to a broad range of data that these technologies offer has not yet been fully exploited. This talk will therefore review the current state of the art, highlight the main challenges from a query processing perspective, and sketch potential ways to solve them. Slides are available here.

Dan Weitzner – “timbr-DBpedia – Exploration and Query of DBpedia in SQL”

The timbr SQL Semantic Knowledge Platform enables the creation of virtual knowledge graphs in SQL. The DBpedia version of timbr supports querying DBpedia in SQL and seamless integration of DBpedia data into data warehouses and data lakes. We already published a detailed blog post about timbr where you can find all relevant information about this amazing new DBpedia service.

Showcase Session

Maribel Acosta – “A closer look at the changing dynamics of DBpedia mappings”

Her presentation looked at the mappings wiki and how different language chapters use and edit it. Slides are available here.

Mariano Rico – “Polishing a diamond: techniques and results to enhance the quality of DBpedia data”

DBpedia is more than a source for creating papers. It is also being used by companies as a remarkable data source. This talk focuses on how we can detect errors and improve the data, from the perspective of academic researchers but also of private companies. We show the case of the Spanish DBpedia (the second DBpedia in size after the English chapter) through a set of techniques, paying attention to results and further work. Slides are available here.

Guillermo Vega-Gorgojo – “Clover Quiz: exploiting DBpedia to create a mobile trivia game”

Clover Quiz is a turn-based multiplayer trivia game for Android devices with more than 200K multiple choice questions (in English and Spanish) about different domains generated out of DBpedia. Questions are created off-line through a data extraction pipeline and a versatile template-based mechanism. A back-end server manages the question set and the associated images, while a mobile app has been developed and released on Google Play. The game is available free of charge and has been downloaded by more than 10K users, who have answered more than 1M questions. Clover Quiz thus demonstrates the advantages of semantic technologies for collecting data and automating the generation of multiple-choice questions in a scalable way. Slides are available here.

Fabian Hoppe and Tabea Tiez – “The Return of German DBpedia”

Fabian and Tabea will present the latest news on the German DBpedia chapter as it returns to the language chapter family after an extended offline period. They will talk about the data set, discuss a few challenges along the way and give insights into future perspectives of the German chapter. Slides are available here.

Wlodzimierz Lewoniewski and Krzysztof Węcel – “References extraction from Wikipedia infoboxes”

In Wikipedia’s infoboxes, some facts have references, which can be useful for checking the reliability of the provided data. We present challenges and methods connected with the metadata extraction of Wikipedia’s sources. We used the DBpedia Extraction Framework along with our own extensions in Python to provide statistics about citations in 10 language versions. The provided methods can be used to verify and synchronize facts depending on the quality assessment of sources. Slides are available here.

Wlodzimierz Lewoniewski also gave insight into the process of extracting references from Wikipedia infoboxes, which we will use in our GFS (GlobalFactSync) project.

Afternoon Session

Sebastian Hellmann, Johannes Frey, Marvin Hofer – “The DBpedia Databus – How to build a DBpedia for each of your Use Cases”

The DBpedia Databus is a platform intended for data consumers. It will enable users to build an automated DBpedia-style knowledge graph for any data they need. The big benefit is that users not only have access to data, but are also encouraged to apply improvements and, therefore, will enhance the data source and benefit other consumers. We want to use this session to officially introduce the Databus, which is currently in beta, and demonstrate its power as a central platform that captures decentrally created client-side value by consumers.

We will give insight into how the new monthly DBpedia releases are built and validated, so you can copy and adapt them for your use cases. Slides are available here.

Interactive session, moderator: Sebastian Hellmann – “DBpedia Connect & DBpedia Commerce – Discussing the new Strategy of DBpedia”

In order to keep growing and improving, DBpedia has been undergoing a growth hack for the last couple of months. As part of this process, we developed two new subdivisions of DBpedia: DBpedia Connect and DBpedia Commerce. The former is a low-code platform to interconnect your public or private Databus data with the unified, global DBpedia graph and export the interconnected and enriched knowledge graph into your infrastructure. DBpedia Commerce is an access and payment platform to transform Linked Data into a networked data economy. It will allow DBpedia to offer any data, mod, application or service on the market. During this session, we will provide more insight into these as well as an overview of how DBpedia users can best utilize them. Slides are available here.

In case you missed the event, all slides and presentations are also available on our website. Further insights, feedback and photos about the event are available on Twitter via #DBpediaDay.

We are now looking forward to more DBpedia meetings next year. So, stay tuned and check Twitter, Facebook and the website, or subscribe to our newsletter for the latest news and information.

If you want to organize a DBpedia Community meeting yourself, just get in touch with us via dbpedia@infai.org regarding program and organization.

Yours

DBpedia Association

SEMANTiCS Interview: Dan Weitzner

As the upcoming 14th DBpedia Community Meeting, co-located with SEMANTiCS 2019 in Karlsruhe, Sep 9-12, is drawing nearer, we would like to take the opportunity to introduce you to our DBpedia keynote speakers.

Today’s post features an interview with Dan Weitzner from WPSemantix who talks about timbr-DBpedia, which we blogged about recently, as well as future trends and challenges of linked data and the semantic web.

Dan Weitzner is co-founder and Vice President of Research and Development of WPSemantix. He obtained his Bachelor of Science in Computer Science from Florida Atlantic University. In collaboration with DBpedia, he and his colleagues at WPSemantix launched timbr, the first SQL Semantic Knowledge Graph that integrates Wikipedia and Wikidata Knowledge into SQL engines.

Dan Weitzner

1. Can you tell us something about your research focus?

WPSemantix bridges the worlds of standard databases and the Semantic Web by creating ontologies accessible in standard SQL. 

Our platform, timbr, is a virtual knowledge graph that maps existing data sources to abstract concepts, accessible directly in all the popular Business Intelligence (BI) tools and also natively integrated into Apache Spark, R, Python, Java and Scala.

timbr enables reasoning and inference for complex analytics without the need for costly Extract-Transform-Load (ETL) processes to graph databases.

2. How do you personally contribute to the advancement of semantic technologies?

We believe we have lowered the fundamental barriers to the adoption of semantic technologies for large organizations that want to benefit from knowledge graph capabilities without, firstly, requiring fundamental changes in their database infrastructure and, secondly, without requiring expensive organizational changes or significant personnel retraining.

Additionally, we implemented the W3C Semantic Web principles to enable inference and inheritance between concepts in SQL, and to allow seamless integration of existing ontologies from OWL. As a result, users across organizations can do complex analytics using the same tools that they currently use to access and query their databases, and can run sophisticated queries over big data without requiring highly technical expertise.

timbr-DBpedia is one example of what can be achieved with our technology. This joint effort with the DBpedia Association allows semantic SQL querying of the DBpedia knowledge graph, and the semantic integration of DBpedia knowledge into data warehouses and data lakes. Finally, timbr-DBpedia allows organizations to benefit from enriching their data with DBpedia knowledge, combining it with machine learning and/or accessing it directly from their favourite BI tools.

3. Which trends and challenges do you see for linked data and the semantic web?

Currently, the use of semantic technologies for data exploration and data integration is a significant trend followed by data-driven communities. It allows companies to leverage relationship-rich data to find meaningful insights in their data.

One of the big difficulties for the average developer and business intelligence analyst is learning semantic technologies. Another is creating ontologies that are flexible and easily maintained. We aim to solve both challenges with timbr.

4. Which application areas for semantic technologies do you perceive as most promising?

I think semantic technologies will bloom in applications that require data integration and contextualization for machine learning models.

Ontology-based integration seems very promising by enabling accurate interpretation of data from multiple sources through the explicit definition of terms and relationships – particularly in big data systems,  where ontologies could bring consistency, expressivity and abstraction capabilities to the massive volumes of data.

5. As artificial intelligence becomes more and more important, what is your vision of AI?

I envision knowledge-based business intelligence and contextualized machine learning models. This will be the bedrock of cognitive computing as any analysis will be semantically enriched with human knowledge and statistical models.

This will bring analysts and data scientists to the next level of AI.

6. What are your expectations about Semantics 2019 in Karlsruhe?

I want to share our vision with the semantic community and I would also like to learn about the challenges, vision and expectations of companies and organizations dealing with semantic technologies. I will present “timbr-DBpedia – Exploration and Query of DBpedia in SQL”.

The End

Visit SEMANTiCS 2019 in Karlsruhe, Sep 9-12 and find out more about timbr-DBpedia and all the other new developments at DBpedia. Get your tickets for our community meeting here. We are looking forward to meeting you during DBpedia Day.

Yours DBpedia Association

DBpedia Live Restart – Getting Things Done

Part VI of the DBpedia Growth Hack series (View all)

DBpedia Live is a long-term core project of DBpedia that immediately extracts fresh triples from all changed Wikipedia articles. After a long hiatus, fresh and live updated data is available once again, thanks to our former co-worker Lena Schindler, whose work we feature in this blog post. Before we dive into Lena’s report, let’s have a look at some general info about DBpedia Live:

Live Enterprise Version

OpenLink Software provides a scalable, dedicated, live Virtuoso instance, built on Lena’s remastering. Kingsley Idehen announced the dedicated business service in our new DBpedia forum.

On the Databus, we collect publicly shared and business-ready dedicated services in the same place where you can download the data. The Databus allows you to download the data, build a service, and offer that service, all in one place. Data uploaders can also see who builds something with their data.

Remastering the DBpedia Live Module

Contribution by Lena Schindler

After developing the DBpedia REST API as part of a student project in 2018, I worked as a student research assistant for DBpedia. My task was to analyze and patch severe issues in the DBpedia Live instance. I will briefly describe the purpose of DBpedia Live, the reasons it went out of service, what I did to fix them, and finally, the changes needed to support multi-language abstract extraction.


Overview

The DBpedia Extraction Framework is Scala-based software with numerous features that have evolved around extracting knowledge (as RDF) from wikis. One part is the DBpedia Live module in the “live-deployed” branch, which is intended to provide a continuously updated version of DBpedia by processing Wikipedia pages on demand, immediately after they have been modified by a user. The backbone of this module is a queue that is filled with recently edited Wikipedia pages, combined with a relational database, called the Live Cache, that handles the diff between two consecutive versions of a page. The module that fills the queue, called the Feeder, needs some kind of connection to a Wiki instance that reports changes to a wiki page. The processing then takes place in four steps (sketched in code after the list):

  1. A wiki page is taken out of the queue. 
  2. Triples are extracted from the page, with a given set of extractors. 
  3. The new triples from the page are compared to the old triples from the Live Cache.
  4. The triple sets that have been deleted and added are published as text files, and the Cache is updated. 
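The loop can be pictured roughly as in the following sketch. It is illustrative only; the type and method names (LiveQueue, LiveCache, Extractor, publishDiff) are invented here for readability and do not mirror the actual module code.

```scala
object LiveLoopSketch {
  final case class WikiPage(id: Long, title: String, wikitext: String)
  final case class Triple(s: String, p: String, o: String)

  trait Extractor { def extract(page: WikiPage): Seq[Triple] }
  trait LiveQueue { def take(): WikiPage }
  trait LiveCache {
    def triplesFor(pageId: Long): Set[Triple]
    def update(pageId: Long, triples: Set[Triple]): Unit
  }

  def processNext(queue: LiveQueue, cache: LiveCache, extractors: Seq[Extractor],
                  publishDiff: (WikiPage, Set[Triple], Set[Triple]) => Unit): Unit = {
    val page       = queue.take()                                  // 1. take a page from the queue
    val newTriples = extractors.flatMap(_.extract(page)).toSet     // 2. extract triples with the given extractors
    val oldTriples = cache.triplesFor(page.id)                     // 3. diff against the Live Cache
    val added      = newTriples diff oldTriples
    val deleted    = oldTriples diff newTriples
    publishDiff(page, added, deleted)                              // 4. publish the added/deleted triple sets ...
    cache.update(page.id, newTriples)                              //    ... and update the cache
  }
}
```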

Background

DBpedia Live had been out of service since May 2018, due to the termination of the Wikimedia RCStream Service, upon which the old DBpedia Live Feeder module relied. This socket-based service provided information about changes to an existing Wikimedia instance and was replaced by the EventStreams service, which runs over a single HTTP connection using chunked transfer encoding and follows the Server-Sent Events (SSE) protocol. It provides a stream of events, each of which contains information about the title, id, language, author, and time of every page edit across all Wikimedia instances.

Fix

Starting in September 2018, my first task was to implement a new Feeder for DBpedia Live based on the new Wikimedia EventStreams service. For the Java world, the Akka framework provides an implementation of an SSE client. Akka is a toolkit developed by Lightbend. It simplifies the construction of concurrent and distributed JVM applications and can be used from both Java and Scala. The Akka SSE client and the Akka Streams module are used in the new EventStreamsFeeder (Akka Helper) to extract and process the data stream. I decided to use Scala instead of Java, because it is a more natural fit for Akka.

After I was able to process events, I had the problem that frequent interruptions in the upstream connection were causing the processing stream to fail. Luckily, Akka provides a fallback mechanism with back-off, similar to the binary exponential backoff of the Ethernet protocol, which I could use to restart the stream (called a “Graph” in Akka terminology).
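A minimal sketch of this pattern, assuming Akka 2.6 with Akka HTTP 10.2 (which bundles the SSE client) and the public Wikimedia recent-change stream; the actual EventStreamsFeeder is structured differently:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.Uri
import akka.http.scaladsl.model.sse.ServerSentEvent
import akka.http.scaladsl.unmarshalling.sse.EventSource
import akka.stream.scaladsl.{RestartSource, Sink, Source}

import scala.concurrent.duration._

object EventStreamsSketch extends App {
  implicit val system: ActorSystem = ActorSystem("eventstreams-sketch")

  // Wikimedia EventStreams endpoint for recent changes (SSE over one chunked HTTP connection).
  val streamUri = Uri("https://stream.wikimedia.org/v2/stream/recentchange")

  // Wrap the SSE source in a restart-with-backoff supervisor, so an upstream interruption
  // re-creates the stream after an exponentially growing delay instead of failing the graph.
  // (Newer Akka versions express the same settings via RestartSettings.)
  val events: Source[ServerSentEvent, _] =
    RestartSource.withBackoff(minBackoff = 3.seconds, maxBackoff = 1.minute, randomFactor = 0.2) { () =>
      EventSource(uri = streamUri, send = Http().singleRequest(_))
    }

  // Each event's data field is a JSON document containing title, wiki, user and timestamp
  // of the edit; a real feeder would parse it and enqueue the affected page.
  events.runWith(Sink.foreach(event => println(event.data)))
}
```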

Another problem was that in many cases there were many changes to a page within a short time interval, and if events were processed quickly enough, each change would be processed separately, stressing the Live instance with unnecessary load. A simple “thread sleep” reduced the number of change sets being published every hour from thousands to a few hundred.
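For illustration only, Akka Streams also offers built-in operators that achieve a similar batching effect, for example by collecting events into time windows and de-duplicating page identifiers before extraction. A sketch of that alternative (not what the Live module actually does), with a hypothetical ChangeEvent type standing in for the parsed SSE payload:

```scala
import akka.stream.scaladsl.Source
import scala.concurrent.duration._

object DedupSketch {
  // Hypothetical event type; in the real feeder this information comes from the parsed SSE JSON.
  final case class ChangeEvent(wiki: String, title: String)

  // Collect up to 1000 events per 60-second window and keep each (wiki, title) pair only once,
  // so a burst of edits to the same page triggers a single re-extraction instead of many.
  def deduplicate(events: Source[ChangeEvent, _]): Source[ChangeEvent, _] =
    events
      .groupedWithin(1000, 60.seconds)
      .mapConcat(_.distinct.toList)
}
```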

Multi-language abstracts

The next task was to prepare the Live module for the extraction of abstracts (typically the first paragraph of a page, or the text before the table of contents). The extractors used for this task were re-implemented in 2017. Getting them to run turned out to be, first, a configuration issue and, second, a candidate for long debugging sessions fixing issues in the dependencies between the “live” and “core” modules. Then, in order to allow the extraction of abstracts in multiple languages, the “live” module needed many small changes at places spread across the code base, and care had to be taken not to slow down the extraction in the single-language case compared to the performance before the change. Deployment was delayed by an issue with the remote management unit of the production server, but was accomplished by May 2019.

Summary

I also collected my knowledge of the Live module in detailed documentation, addressed to developers who want to contribute to the code. This includes an explanation of the architecture as well as installation instructions. After 400 hours of work, DBpedia Live is alive and kicking, and now supports multi-language abstract extraction. Being responsible for many aspects of Software Engineering, like development, documentation, and deployment, I was able to learn a lot about DBpedia and the Semantic Web, hone new skills in database development and administration, and expand my programming experience using Scala and Akka. 

“Thanks a lot to the whole DBpedia Team who always provided a warm and supportive environment!”

Thank you, Lena! It is people like you who help DBpedia improve and develop further, and who help to make data networks a reality.

Follow DBpedia on LinkedIn, Twitter or Facebook and stop by the DBpedia Forum to check out the latest discussions.

Yours DBpedia Association

Global Fact Sync – Synchronizing Wikidata & Wikipedia’s infoboxes

How is data edited in Wikipedia/Wikidata? Where does it come from? And how can we synchronize it globally?  

The GlobalFactSync (GFS) Project — funded by the Wikimedia Foundation — started in June 2019 and has two goals:

  • Answer the above-mentioned three questions.
  • Build an information system to synchronize facts between all Wikipedia language-editions and Wikidata. 

Now we are seven weeks into the project (10+ more months to go) and we are releasing our first prototypes to gather feedback. 

How – Synchronization vs Consensus

We follow an absolute “Human(s)-in-the-loop” approach when we talk about synchronization. The final decision whether to synchronize a value or not should rest with a human editor who understands consensus and the implications. There will be no automatic imports. Our focus is to drastically reduce the time to research all references for individual facts.

A trivial example to illustrate our reasoning is the release date of the single “Boys Don’t Cry” (March 16th, 1989) in the English, Japanese, and French Wikipedia, in Wikidata, and finally in the external open database MusicBrainz. A human editor might need 15-30 minutes to find and open all the different sources, while our current prototype can spot differences and display them in 5 seconds.

We already had our first successful edit where a Wikipedia editor fixed the discrepancy with our prototype: “I’ve updated Wikidata so that all five sources are in agreement.” We are now working on the following tasks:

  • Scaling the system to all infoboxes, Wikidata and selected external databases (see below on the difficulties there)
  • Making the system:
    •  “live” without stale information
    • “reliable” with less technical errors when extracting and indexing data
    • “better referenced” by not only synchronizing facts but also references 

Contributions and Feedback

To ensure that GlobalFactSync will serve and help the Wikiverse, we encourage everyone to try our data and microservices and leave us some feedback, either on our Meta-Wiki page or via email. In the following 10+ months, we intend to improve and build upon these initial results. At the same time, these microservices are available for every developer to exploit and to build useful applications on. The most promising contributions will be rewarded and receive the book “Engineering Agile Big-Data Systems”. Please post feedback on any tool or GUI here. In case you need changes to be made to the API, please let us know, too.

For the ambitious future developers among you, we have some budget left that we will dedicate to an internship. In order to apply, just mention it in your feedback post.

Finally, to talk to us and other GlobalfactSync-Users you may want to visit WikidataCon and Wikimania, where we will present the latest developments and the progress of our project. 

Data, APIs & Microservices (Technical prototypes) 

Data Processing and Infobox Extraction

For GlobalFactSync we use data from Wikipedia infoboxes of different languages, as well as Wikidata and DBpedia, and fuse them into one big, consolidated dataset – a PreFusion dataset (in JSON-LD). More information on the fusion process, which is the engine behind GFS, can be found in the FlexiFusion paper. One of our next steps is to integrate MusicBrainz into this process as an external dataset. We hope to integrate even more such external datasets to increase the amount of available information and references.

First microservices 

We deployed a set of microservices to show the current state of our toolchain.

  • [Initial User Interface] The GlobalFactSync UI prototype (available at http://global.dbpedia.org) shows all extracted information available for one entity for different sources. It can be used to analyze the factual consensus between different Wikipedia articles for the same thing. Example: Look at the variety of population counts for Grimma.
  • [Reference Data Download] We ran the Reference Extraction Service over 10 Wikipedia languages. Download dumps here.
  • [ID service] Last but not least, we offer the Global ID Resolution Service. It ties together all available identifiers for one thing (i.e. at the moment all DBpedia/Wikipedia and Wikidata identifiers – MusicBrainz coming soon…) and shows their stable DBpedia Global ID. 

Finding sync targets

In order to test our algorithms, we started by looking at various groups of subjects, our so-called sync targets. Based on the different subjects, a set of problems with varying layers of complexity was identified:

  • identity check/check for ambiguity — Are we talking about the same entity? 
  • fixed vs. varying property — Some properties vary depending on nationality (e.g., release dates), or point in time (e.g., population count).
  • reference — Depending on the entity’s identity check and the property’s fixed or varying state the reference might vary. Also, for some targets, no query-able online reference might be available.
  • normalization/conversion of values — Depending on language/nationality of the article properties can have varying units (e.g., currency, metric vs imperial system).

The check for ambiguity is the most crucial step to ensure that the infoboxes being compared do refer to the same entity. We found instances where the Wikipedia page and the infobox shown on that page were presenting information about different subjects (e.g., see here).

Examples

The group ‘NBA players’ was identified as a good sync target to start with. There are no ambiguity issues, it is a clearly defined group of persons, and the number of varying properties is very limited. Information seems to be derived mainly from two websites (nba.com and basketball-reference.com), and normalization is only a minor issue. ‘Video games’ also proved to be an easy sync target, with the main problem being varying properties, such as different release dates for different platforms (Microsoft Windows, Linux, MacOS X, Xbox) and different regions (NA vs EU).

More difficult topics, such as ‘cars’, ‘music albums’, and ‘music singles’, showed more potential for ambiguity as well as property variability. A major concern we found was Wikipedia pages that contain multiple infoboxes (often seen for pages referring to a certain type of car, such as this one). Reference and fact extraction can be done for each infobox, but currently we run into trouble once we fuse this data.

Further information about sync targets and their challenges can be found on our Meta-Wiki discussion page, where Wikipedians that deal with infoboxes on a regular basis can also share their insights on the matter. Some issues were also found regarding the mapping of properties. In order to make GlobalFactSync as applicable as possible, we rely on the DBpedia community to help us improve the mappings. If you are interested in participating, we will connect with you at http://mappings.dbpedia.org and in the DBpedia forum.  

Bottom line: We value your feedback.

Your DBpedia Association

DBpedia Growth Hack – Fall/Winter 2019

*UPDATE* – We are now five weeks into our growth hack. Read on below to find out how it all started. Click here to follow up on each of our milestones.

A growth hack – how come?

Things have gone a bit quiet around DBpedia. No new releases, no clear direction to go. Did DBpedia stop? Actually, no. There were community and board member meetings, discussions, and 500 messages per week on dbpedia.slack.com.

We are still there. We, as a community, restructured and now we are done, which means that DBpedia will now work in a more focused way to build on its technology leadership role in the Web of Data and thus – with our very own DBpedia Growth Hack – bring new innovation and free fuel to everybody.

What is this growth hack?

We restructured in two areas:

  1. The agility of knowledge delivery – our release cycle was too slow and too expensive. We were unable to include substantial contributions from DBpedians. Therefore, quality and features stagnated.
  2. Transparent processes – DBpedia has a crafty community with highly skilled knowledge engineers backing it. At some point, we grew too much and became lumpy, with a big monolithic system that nobody could improve because of side effects. So we designed a massive curation infrastructure where information can be retrieved and adjusted, and errors can be discussed and fixed.

We have been consistently working on this restructuring for two years now, and we have the infrastructure ready as a horizontal prototype, meaning each part works and everybody can start using it. We ate our own dog food and built the first application:

(Frey et al., DBpedia FlexiFusion – Best of Wikipedia > Wikidata > Your Data, accepted at ISWC 2019).

Now we will go through each part, polish and document it, and report on each with its own blog post. Stay tuned!

Is DBpedia Academic or Industrial?

The Semantic Web has a history of being labelled as too academic, and a part of that label has colored DBpedia as well. Here is our personal truth: it is an engineering project and therefore it swings both ways. It is a great academic success, with 25,000 papers using the data and enabling research and innovation. The free data drives data-driven research. Also, we are probably THE fastest pathway from lab to market, as our industry adoption has unprecedented speed. Proof will follow in the blog posts of the Growth Hack series.

Blog Posts of the Growth Hack series:

(not necessarily in that order, depending on how fast we can polish & document)

  • Query DBpedia as SQL – a first service on the Databus
  • DBpedia Live Extraction – Realtime updates of Wikipedia
  • DBpedia Business Models – How to earn money with DBpedia & the Databus
  • MARVIN Release Bot – together with https://blogs.tib.eu/wp/tib/, including an update of https://wiki.dbpedia.org/Datasets
  • The new forum at https://forum.dbpedia.org is already open for registration, but needs some structure. It is intended as a replacement for support.dbpedia.org

In addition, here are some announcements of ongoing projects:

  • GlobalFactSync (GFS) – Syncing facts between Wikipedia and Wikidata
  • Energy Databus – LOD-GEOSS project focusing on energy system data on the bus
  • Supply-Chain-Management Databus – PLASS project focusing on SCM data on the bus

So, stay tuned for our upcoming posts and follow our journey.

Yours

DBpedia Association

Home Sweet Home – The 13th DBpedia Community Meeting

For the second time now, we co-located one of our DBpedia Community meetings with the LDK conference. After the previous edition in Galway two years ago, it was Leipzig’s turn to host the event. Thus, the 13th DBpedia Community meeting took place in this beautiful city, which is also home to the DBpedia Association’s head office. Win-win, we’d say.

After a very successful LDK conference on May 20th-21st, representatives of the European DBpedia community met at Villa Ida Mediencampus on Thursday, May 23rd, to present their work with DBpedia and to exchange ideas about the DBpedia Databus.

For those of you who missed it or for those who want a little retrospective on the day, this blog post provides you with a short LDK-wrap-up as well as a recap of our DBpedia Day.

First things first

First and foremost, we would like to thank the LDK organizers for co-locating our meeting, thus enabling fruitful synergies and providing a platform for the DBpedia community to exchange ideas.

LDK

The first presentation that kicked off the conference was given by Prof. Christiane Fellbaum from Princeton University. The topic of her talk was “Mapping the Lexicons of Signs and Words”, with the main focus on her research on mapping WordNet and SignStudy, a resource for American Sign Language. Shortly after, Prof. Eduard Werner from Leipzig University gave a very exciting talk on the Sorbian languages. He discussed the nature of the Sorbian languages, their historical background, and the unfortunate imminent extinction of Lower Sorbian due to a decline in native speakers.

The first day of LDK was full of exciting presentations related to various language-oriented topics. Researchers exchanged ideas about linguistic vocabularies, SPARQL query recommendations, role and reference grammar, language detection, entity recognition, machine translation, under-resourced languages, metaphor identification, event detection and linked data in general. The first day ended with fruitful discussions during the poster session. Afterwards, LDK visitors had the chance to mingle with locals in some of Leipzig’s most exciting bars during a pub crawl.

Prof. Christian Bizer from the University of Mannheim opened the second day with a keynote on “Schema.org Annotations and Web Tables: Underexploited Semantic Nuggets on the Web?”. In his talk, he gave a nice overview of the research on knowledge extraction around the large-scale Web Data Commons corpus, its findings, open challenges and possible exploitations of this corpus.

The second day was busy with four sessions, each populated with presentations on exciting topics ranging from relation classification, dictionary linking and entity linking, to terminology models, topical thesauri and morphology.

The series of presentations ended with an organ prelude played by David Timm, the University Music Director at Leipzig University. Finally, the day and the conference concluded with a conference dinner at Moritzbastei, one of Leipzig’s famous cultural centres.

DBpedia Day

On May 23rd, the DBpedia Community met for the 13th DBpedia community meeting. The event attracted more than 60 participants who extended their LDK experience or followed our call to Leipzig.

Opening & keynotes

The meeting was opened by Dr. Sebastian Hellmann, the executive director of the DBpedia Association. He gave an overview of the latest developments and achievements around DBpedia, with the main focus on the DBpedia Databus technologies. The first keynote was given by Dr. Peter Haase from metaphacts, with an unusual interactive presentation on “Linked Data Fun with DBpedia”. The second keynote speaker was Prof. Heiko Paulheim, presenting findings, challenges and results from his work on the construction of the DBkWik knowledge graph by exploiting the DBpedia extraction framework.

Showcase session

The showcase session started with a presentation by Krzysztof Węcel on “Citations and references in DBpedia”, followed by Peter Nancke with a presentation on the “TeBaQA Question Answering System”, Maribel Acosta Deibe speaking about “Crowdsourcing the Quality of DBpedia” and, finally, a presentation by Angus Addlesee on “Data Reconciliation using DBpedia”.

NLP & DBpedia session

The DBpedia & NLP session was opened by Diego Moussallem, presenting the results of his work on “Generating Natural Language from RDF Data”. The second presentation was given by Christian Jilek on the topic of “Named Entity Recognition for Real-Time Applications”, which also won the best research paper award at the LDK conference. Next, Jonathan Kobbe presented the best student paper of the LDK conference, on the topic of “Argumentative Relation Classification”. Finally, Edgard Marx closed the session with an overview presentation on “From the word to the resource”.

 

Side-Event – Hackathon

The “Artificial Intelligence for Smart Agriculture” hackathon focused on enhancing the usability of automatic analysis tools that utilize semantic big data for agriculture, as well as on outreach for the DataBio project to the DBpedia community. The event was supported by PNO, Spacebel, PSNC, and InfAI e.V.

We improved the visualization module of Albatross, a platform for processing and analyzing Linked Open Data, and added functionalities to geo-L, the geospatial link discovery tool.  

In addition, we presented a paper about Linked Data publication pipelines, focusing on agri-related data, at the co-located LSWT conference.

Wrap Up

After the event, DBpedians joined the DBpedia Association in the nearby pub Gosenschenke to delve into more lively conversations about the Semantic Web, Linked Data & DBpedia.

In case you missed the event, all slides and presentations are available on our website. Further insights, feedback and photos about the event can be found on Twitter via #DBpediaLeipzig.

We are now looking forward to the next DBpedia Community Meeting on Sept 12th in Karlsruhe, Germany. This meeting is co-located with the SEMANTiCS conference. Contributions are still welcome. Just ping us via dbpedia@infai.org and show us what you’ve got. You should also get in touch with us if you want to host a DBpedia meetup yourself. We will help you with the program, dissemination or organizational matters of the event if need be.

Stay tuned, check Twitter, Facebook, and the website, or subscribe to our newsletter for the latest news and updates.

 

Your DBpedia Association

Artificial Intelligence (AI) and DBpedia

Artificial Intelligence (AI) is currently the central subject of the just-announced ‘Year of Science’ by the German Federal Ministry. In recent years, new approaches to AI were explored, new mindsets established, new tools developed, and new technologies implemented. AI is THE key technology of the 21st century. Together with Machine Learning (ML), it transforms society faster than ever before and will lead humankind to its digital future.

In this digital transformation era, success will be based on using analytics to discover the insights locked in the massive volume of data being generated today. Success with AI and ML depends on having the right infrastructure to process the data.[1]

The Value of Data Governance

One key element in facilitating ML and AI for the digital future of Europe is ‘decentralized semantic data flows’, as stated by Sören Auer, a founding member of DBpedia and current director at TIB, during a meeting about the digital future in Germany at the Bundestag. He further commented that major AI breakthroughs were indeed facilitated by easily accessible datasets, whereas the algorithms used were comparatively old.

In conclusion, Auer reasons that the actual value lies in data governance. In fact, in order to guarantee progress in AI, the development of a common and transparent understanding of data is necessary. [2]

DBpedia Databus – Digital Factory Platform

The DBpedia Databus – our digital factory platform – is one of many drivers that will help to build the much-needed data infrastructure for ML and AI to prosper. With the DBpedia Databus, we create a hub that facilitates a ‘networked data economy’ revolving around the publication of data. Upholding the motto “Unified and Global Access to Knowledge”, the Databus facilitates exchanging, curating and accessing data between multiple stakeholders – always, anywhere. Publishing data on the Databus means connecting and comparing your data to the network. Check our current DBpedia releases via http://downloads.dbpedia.org/repo/dev/.

DBpedia Day & AI for Smart Agriculture

Furthermore, you can learn about the DBpedia Databus during our 13th DBpedia Community meeting, co-located with the LDK conference in Leipzig in May 2019. Additionally, as a special treat for you, we also offer an AI side event on May 23rd, 2019.

May we present the think tank and hackathon “Artificial Intelligence for Smart Agriculture”. The goal of this event is to develop new ideas and small tools which can demonstrate the use of AI in the agricultural domain or the use of AI for a sustainable bio-economy. In that regard, a special focus will be on the use and the impact of linked data for AI components.

In short, the two-part event, co-located with LSWT & DBpedia Day, comprises workshops and on-site team hacking as well as presentations of results. The activity is supported by the projects DataBio and Bridge2Era as well as CIAOTECH/PNO. All participating teams are invited to join and present their projects. Further information is available here. Please submit your ideas and projects here.

 

Finally, the DBpedia Association is looking forward to meeting you in Leipzig, home of our head office. Pay us a visit!

____

Resources:

[1] Zeus Kerravala; The Success of ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING Requires an Architectural Approach to Infrastructure. ZK Research: A Division of Kerravala Consulting © 2018 ZK Research, available via http://bit.ly/2UwTJRo

[2] Sören Auer; Statement at the Bundestag during a meeting in AI, Summary is available via https://www.tib.eu/de/service/aktuelles/detail/tib-direktor-als-experte-zu-kuenstlicher-intelligenz-ki-im-deutschen-bundestag/

Call for Participation – LDK Conference & DBpedia Day

Call for Participation – LDK Conference

With the advent of digital technologies, an ever-increasing amount of language data is now available across various application areas and industry sectors, thus making language data more and more valuable. In that context, we would like to draw your attention to the 2nd Language, Data and Knowledge conference (LDK for short), which will be held in Leipzig from May 20th till 22nd, 2019.

The Conference

This new biennial conference series aims at bringing together researchers from across disciplines concerned with language data in data science and knowledge-based applications.

Keynote Speakers

We are happy that Christian Bizer, a founding member of DBpedia, will be one of the three amazing keynote speakers opening the LDK conference. Apart from Christian, Christiane Fellbaum from Princeton University and Eduard Werner from Leipzig University will share their thoughts on current language data issues to start vital discussions revolving around language data.

Be part of this event in Leipzig and catch up with the latest research outcomes in the areas of acquisition, provenance, representation, maintenance, usability, quality as well as legal, organizational and infrastructure aspects of language data.  

DBpedia Community Meeting

To get the full Leipzig experience, we would also like to invite you to our DBpedia Community meeting, which is co-located with LDK and will be held on May 23rd, 2019. Contributions are still welcome. Just get in touch via dbpedia@infai.org.

We also offer an interesting side-event, the Thinktank and Hackathon “Artificial Intelligence for Smart Agriculture”. Visit our website for further information.

Join LDK conference 2019 and our DBpedia Community Meeting to catch up with the latest research and developments in the Semantic Web Community. 

 

Yours DBpedia Association

One of 206 – GSoC 2019 – Call for students

 

Pinky: Gee, Brain, what are we gonna do this year?
Brain: The same thing we do every year, Pinky. Taking over GSoC.

Exactly what DBpedia plans to do this summer. We have been accepted as one of 206 open source organizations to participate in Google Summer of Code  (GSoC) again. Yes, ONE OF 206, let that sink in. The upcoming GSoC marks the 15th consecutive year of the program and is the 8th year in a row for DBpedia.

What is GSoC? 

Google Summer of Code is a global program focused on bringing student developers into open source software development. Funds will be given to students (BSc, MSc, PhD) to work for three months on a specific task. For GSoC newbies, this short video and the information provided on their website will explain all there is to know about GSoC.

Time for a New Narrative

In past years, we mentored many successful projects by female students, but our applicants were mostly male. Now, it is time to change this narrative and work towards more diversity in science. This year, we at DBpedia are more determined than ever to encourage female students to apply for our projects. To that end, we have also engaged excellent female mentors to raise the female percentage in our mentor team. We are proud of all female DBpedians who help to shape the future of DBpedia.

In the following four weeks, we invite all students, female and male, who are interested in the Semantic Web and open source development to apply for our projects. You can also contribute your own ideas to work on during the summer. We are regularly growing our community through GSoC and can deliver more and more opportunities to you.

And this is how it works: 3 steps to GSoC stardom

  1. Open source organizations such as DBpedia announce their project ideas.
  2. Students contact the mentor organizations they want to work with and write up a project proposal.
  3. After a selection phase, students are matched with a specific project and a set of mentors to work on the project during the summer.

To all the smart brains out there: if you are a student who wants to work with us during summer 2019, check our list of project ideas and warm-up tasks, or come up with your own idea, and get in touch with us.

Further information on the application procedure is available via our DBpedia guidelines. There you will find information on how to contact us and how to apply for GSoC. Please also note the official GSoC 2019 timeline for your proposal submission and make sure to submit on time. Unfortunately, extensions cannot be granted. The final submission deadline is April 9th, 2019, 8 pm CET.

Finally, check our website for information on DBpedia, follow us on Twitter or subscribe to our newsletter.

And in case you still have questions, please do not hesitate to contact us via praetor@infai.org.

We are thrilled to meet you and your ideas.

Your DBpedia-GSoC -Team