Just when you thought Wikipedia couldn’t get any bigger, we’re getting a new “Wiki-tool.” Development of this new arm of the “Wikiverse,” to be named “Wikidata,” began March 1. Funding will come in part from Google, and it is the first new project to come out of the Wikimedia Foundation since 2006. The new database has begun with the German arm of Wikipedia, and here’s how the project’s goals are described:
“The goal of Wikidata is to create a common data repository. It will make it possible to collect and curate data and then use it within the Wikipedias and by 3rd-parties. One example: Wikidata contains the birth date of a person of public interest associated with the person’s name. This can then be used in all Wikipedias (and outside of them) and only needs to be maintained in one place. This is similar to how Wikimedia Commons works – just not for multimedia but for data.”
Wikidata – Good? Not so much?
The positive aspect is that you’ll be able to get answers that aren’t necessarily easy to find. How many times have you wanted to see some type of data? For example, you might want to ask, “How many smartphone users are there in the Tampa area?” or “How many cat owners are there in Heidelberg, PA?” Some questions, both serious and silly, pop into my mind on a daily basis, and unless I take the time to research the answer, there’s nowhere I can just click and pull the data down.
You can get some of this information now, of course, but it either takes a bit of searching or, in many instances, you have to pay for the answer.
Having Wikidata might have been very useful in my old journalism days, when I’d have to go to the library and pore over huge volumes of data to find that perfect stat for my query letter or article. This project might have made life much simpler.
Or would it have?
UGC is not reliable and it won’t be at Wikidata, either.
Wikidata is going to be just like Wikipedia in that it will rely on UGC (user-generated content) in many instances. So, how reliable will it be? I mean, when I write something — anything from a blog post to a book — I want the data I use in that work to be 100% accurate. I fear that, just as with Wikipedia, the information you get may not be 100% accurate, and with the volume of data they plan to include, there’s no way to vet all of the information.
So, what do you think? Good thing? Bad thing?
I’m seeing it as a rough-estimate type of site. So, for example, if you’re about to give a presentation to a client and you need a rough stat, it will probably be OK to use. But if you’re writing an article for the Wall Street Journal, not so much.
It just worries me. I mean, a lot of people already take Wikipedia as gospel. It’s listed first for many, many keywords online, this trend of taking what’s inside Wikipedia at face value is disconcerting, and I know people who tell me it’s their favorite website. Hmm…
What are your thoughts? Good? Bad? Or do you think you’ll use it at all?