I saw this and was momentarily intrigued. Then I clicked on the pic to see it full size. It didn’t get any bigger, so it was still unreadable. I ended up having to visit the original story at Journalism.co.uk – now the individual text was readable, but you couldn’t get a sense of the meaning of the whole without going full-screen to this from Mattermap. And then? It turned out to be nothing more than a grouped collection of tweets, all of which were available, and more easily readable, in the text of the website below anyway. I got there in the end, but only after three clicks, some head-scratching and a scroll. Sometimes good old-fashioned text is all you need!
A few months ago, I mentioned how academia.edu provides near-real-time tracking of visits and visitor demographics for your publications, using my own paper “Are We All Online Content Creators Now? Web 2.0 and Digital Divides” as an example of how it works. Recently I came across a new tool that makes this tracking process much easier: Altmetric, an academic-focused social media tracking company. It makes its money selling large-scale data to publishers and academic institutions, but if you are an academic and have published anything with a DOI (or want to find out about the social media footprint of a competitor’s work, for that matter), then Altmetric can display a “dashboard” of data like this. I’m quite pleased about this particular metric:
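For the programmatically inclined, Altmetric also exposes a public API you can query by DOI. Here is a minimal Python sketch of turning that kind of JSON payload into a one-line summary; the field names below are illustrative assumptions about the response shape, not a guaranteed schema, and in real use you would fetch the payload from the API rather than hard-coding it.

```python
import json

# Hypothetical example of the kind of JSON a DOI lookup against the public
# Altmetric API (https://api.altmetric.com/v1/doi/<your-doi>) might return.
# All field names and values here are assumptions for illustration only.
sample_response = json.dumps({
    "title": "Are We All Online Content Creators Now? Web 2.0 and Digital Divides",
    "score": 12.5,
    "cited_by_tweeters_count": 18,
    "readers_count": 40,
})

def summarise_altmetrics(raw):
    """Turn a raw Altmetric-style JSON payload into a one-line summary."""
    data = json.loads(raw)
    return (f"{data['title']}: score {data['score']}, "
            f"{data['cited_by_tweeters_count']} tweeters, "
            f"{data['readers_count']} readers")

print(summarise_altmetrics(sample_response))
```

A script like this could check a whole publication list in one go, which is exactly the kind of drudgery the dashboard saves you from doing by hand.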
I’d read and heard about the horrific tsunami that hit Japan three years ago, but none of it moved me the way this simple podcast eyewitness testimony did. Audio is the most intimate of media, and it lets the mind fill in its own pictures of the events described, which I think are more vivid than any video could be. And unlike many conventional documentaries and news programmes, this 15-minute first-person format let the witness’s testimony speak for itself.
Carl Pillitteri, a Fukushima nuclear engineer, told this story at a Moth event (the Moth is a non-profit organization that runs events where people talk about their lives, live and without notes).
If you are using images online as a journalist, you need to ensure that you have the rights to put them on your site legally. One way is a Google image search: click on “Search tools” and select “Usage rights” to make sure you can use what you find. In addition, image libraries like Getty Images hold a lot of very high-quality images (more than 35 million at last count), including pictures relating to the latest news. This is why they can charge for them, and why they put watermarks over the images you can see for free, so you don’t pirate them. Now, however, tired of trying to fight the many online pirates of its content, Getty seems to have decided to make it easy for people to use its images online for free, in controlled ways and with attribution.
Getty is defining “non-commercial” (and therefore permissible) uses of its images quite broadly, so as long as you use its image-embedding tool you should be able to use its many pictures legitimately on most journalistic projects online (for print use you would still need to purchase them). There is already speculation that the other major picture agencies may follow suit. Here’s how to take advantage of Getty Images’ new embed feature (and its limitations).
Getty’s “front page” for searching embeddable images is here.
I love hearing about the latest digital tools that help one operate as a journalist or researcher, whether that be Twitter search and monitoring tools, bookmark management tools, people search tools and so on. “Search: Theory and Practice in Journalism Online” by Dick is particularly good at finding and describing this stuff, but I am not aware of any articles that bring the different pieces together to describe all the key online tools a journalist uses and how they fit into a workflow. I plan to come up with something myself to share with students; if I do I will post it here, but I would love to hear what other people are using.
I’m as excited as anyone about the potential for organizations and governments to use the ever-increasing amounts of data we’re ‘sharing’ (I prefer the less value-laden ‘giving off’) because of our love of smartphones and the like. So I enjoyed this presentation by Tom Raftery about “mining social media for good”.
(Slideshare ‘deck’ here)
And I am sure his heart is in the right place, but as I read through the transcript of his talk a few of his ‘good’ cases started to seem a little less cheering.
Waze, which was recently bought by Google, is a GPS application, which is great, but it’s a community one as well. So you go in and you join it and you publish where you are, you plot routes.
If there are accidents on route, or if there are police checkpoints on route, or speed cameras, or hazards, you can click to publish those as well.
Hm – avoid accidents and hazards sure – but speed cameras are there for a reason, and I can see why giving everyone forewarning of police checkpoints might not be such a hot idea either.
In law enforcement social media is huge, it’s absolutely huge. A lot of the police forces now are actively mining Facebook and Twitter for different things. Like some of them are doing it for gang structures, using people’s social graph to determine gang structures. They also do it for alibis. All my tweets are geo-stamped, or almost all, I turned it off this morning because I was running out of battery, but almost all my tweets are geo-stamped. So that’s a nice alibi for me if I am not doing anything wrong.
But similarly, it’s a way for authorities to know where you were if there is an issue that you might be involved in, or not.
To be fair, Tom does note that this is “more of a dodgy use” than the others. And what about this?
A couple of years ago Nestlé got Greenpeace. They were sourcing palm oil for making their confectionery from unsustainable sources, from — Sinar Mas was the name of the company and they were deforesting Indonesia to make the palm oil.
So Greenpeace put up a very effective viral video campaign to highlight this [...] Nestlé put in place a Digital Acceleration Team who monitor very closely now mentions of Nestlé online and as a result of that this year, for the first time ever, Nestlé are in the top ten companies in the world in the Reputation Institute’s Repute Track Metric.
Are we talking about a company actually changing its behaviour here, or one using its financial power to drown out dissent?
You should definitely check out the talk and transcript, and if we’re going to have all this data flowing around about us, it does seem sensible to use some of it for good ends – there are certainly many worthy ideas outlined in it. But if even a presentation about the good uses of social media data mining contains material this alarming, maybe we should be asking more loudly whether the potential harms outweigh these admitted goods.
Just as the old year passes, I have finished off the last substantive chapter of my upcoming book. Now all I have to do is:
- Add a concluding chapter
- Go through and fill in all the [some more clever stuff here] bits
- Check the structure and ensure I haven’t repeated myself too often
- Incorporate comments from my academic colleagues and friends
- Submit to publisher
- Incorporate comments from my editor and their reviewers
- Index everything
- Deal with inevitable proofing fiddly bits
- Pace for months while physical printing processes happen… then…
- I Haz Book!
Doesn’t seem like too much further, does it?
Update Jan 2, 2014 – I have finished my draft concluding chapter, which ends, “[some form of ringing final summing-up here!]”
Just as we are all finding out how much the government has been tracking our meta-data, a whole ecosystem of public-facing meta-data tracking services is arising, giving us the chance to measure our own activity and track the diffusion of our messages across the web. This is particularly noticeable when looking at Twitter but other social media also increasingly offer sophisticated analytics tools.
Thus it was that as my latest open access paper “Are We All Online Content Creators Now? Web 2.0 and Digital Divides” went live two days ago, I found myself not just mentioning it to colleagues but feeling obliged to update multiple profiles and services across the web – Facebook, Twitter, academia.edu, Mendeley and LinkedIn. I found to my surprise (by tracking my announcement tweet using Buffer) that only 1% of the thousands I have ‘reached’ so far seem to have checked my abstract. On the other hand, my academia.edu announcement has brought me twice as many readers. More proof that it’s not how many followers you have but what kind that matters most.
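The comparison is easy to make concrete. The sketch below computes a simple “conversion rate” per channel; the raw numbers are invented for illustration, since the post only mentions “thousands” reached on Twitter, a 1% rate, and twice as many readers via academia.edu.

```python
# Illustrative only: the figures below are assumed, not the post's real data.
def conversion_rate(actions, reach):
    """Share of the people 'reached' who actually acted, as a percentage."""
    return actions / reach * 100

twitter = conversion_rate(30, 3000)   # assumed: 30 abstract views from 3,000 reached
academia = conversion_rate(60, 150)   # assumed: 60 readers from 150 followers

print(f"Twitter: {twitter:.0f}% vs academia.edu: {academia:.0f}%")
```

Even with made-up numbers, the point stands: a small audience of the right kind of followers can convert far better than a large general one.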
Pleasingly, from Academia.edu I can also see that my paper has already been read in Canada, the US, Guyana, South Africa, the Netherlands, Germany, Poland, and of course the UK.
The biggest surprise? Google can find my paper already on academia.edu but has not yet indexed the original journal page!
I will share more data as I get it if my fellow scholars are interested. Anyone else have any data to share?
It has long been understood by scientists (but not by enough parents) that the amount children are talked to has a crucial impact on their later educational development, so I was pleased to see the New York Times pick this story up. However, it rather wastes the opportunity because it is so clumsily written – particularly in its handling of statistics.
The first paragraph is confusing and unhelpful: “…by age 3, the children of wealthier professionals have heard words millions more times than those of less educated parents.” Clearly, rich kids don’t hear millions of times more words than poor ones, but that might be what you pick up from a quick scan. Further down the story: “because professional parents speak so much more to their children, the children hear 30 million more words by age 3 than children from low-income households”. Unfortunately, this is meaningless unless you know how many million words both kinds of children heard overall. The scale of the difference is only hinted at near the end of the piece, when you finally find out (through a different study) that “some of the children, who were 19 months at the time, heard as few as 670 ‘child-directed’ words in one day, compared with others in the group who heard as many as 12,000”.
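A back-of-the-envelope calculation shows why the missing baselines matter. Using the per-day extremes quoted at the end of the article (670 vs 12,000 child-directed words in a day), and assuming, as a rough simplification of my own, that those rates hold every day for three 365-day years:

```python
# Back-of-the-envelope check of the word-gap arithmetic. The per-day figures
# come from the article; the constant three-year window is my own assumption.
low_per_day = 670
high_per_day = 12_000
days_to_age_three = 3 * 365

gap_by_age_three = (high_per_day - low_per_day) * days_to_age_three
print(f"Implied gap by age 3: about {gap_by_age_three / 1e6:.1f} million words")
```

Even these extreme daily figures imply a gap of roughly 12 million words, which a reader cannot reconcile with the 30-million headline number without the overall totals the article never supplies – exactly the kind of context the piece should have given.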
Very annoyingly, despite calling the 20-year-old study in the first paragraph a “landmark”, the story provides no link to it and no information to help readers find it later. It refers to new findings based on a “small sample” but doesn’t say how small.
Crucially, while it seems to suggest that pre-kindergarten schooling could make up for this gap, it presents no evidence for this. Intuitively, a big push to get parents to talk to their babies and small children would be much more effective at solving this particular problem, since parents spend much more time with them than any educator could.
Ironically, there was a much better-explained story on the same issue, also from the NYT, back in April – but not, alas, in the print edition.
So, Tim, could you take this as a reasonable excuse to bring some important research to the public eye? And Motoko (whose work on the future of reading I have liked a great deal), could you go back to the piece online and tidy it up a bit if you get the chance?
Like many a tech-savvy parent, I am trying to divert my kid’s gaming attention towards Minecraft – and with some success. There’s a ‘legacy’ iBook G4 he can use, but getting the program to run at all was difficult, and now that it is running I have found it unusably slow, even with all the graphical options I could find turned down (and with non-working sound). This to run a game that is deliberately designed to look ‘retro’ and which I imagine could have worked on a Mac LC c. 1990 if suitably coded! Since it’s a very popular game with a hyperactive development community, I thought there was bound to be a way to make things work better. Alas, nothing I tried (mainly Magic Launcher launching OptiFine Light) seemed to work, and it took me several hours of forum reading, installation and tweaking to get even this far.
It’s not a new observation, but what makes older machines like my nine-year-old Mac obsolete does not actually seem to be the speed or capability of the underlying hardware so much as the steady ratcheting up of the assumptions that software makes. Somewhere (presumably in Java, which is Minecraft’s ‘environment’) I’m guessing there’s a whole load of unnecessary code added in the last nine years that has dragged what should be a perfectly usable game down to a useless speed.
Just to drag this back to academic relevance for a moment: this is, to my mind, a good example of how the structure of the computer industry aggravates digital divides, by gradually demanding that users ‘upgrade’ their software to the point that their machines stop working, well before the end of their ‘natural’ lives.
PS If anyone has managed to get Minecraft working adequately on a Mac of similar vintage please share any tips…