The rise of AI imagery is a real concern for public discourse: the actress Rosie O’Donnell shared an AI-generated TikTok image of a Palestinian mother dragging her children and belongings down a rubble-strewn road. She believed it was real, insisting it was not an AI image, but she later deleted the post.
The WPP’s Open Format category had allowed submissions of images partially created with a photo-editing tool known as generative fill, which automatically creates or removes elements in a photograph, sometimes through a text prompt.
Sony’s new in-camera solution creates a digital signature at the time of capture, and unlike Leica’s M11-P, Sony’s answer to the “fake news” problem does not require specialized hardware inside its cameras. Existing cameras, like the Sony a1 and a7S III, will support in-camera signatures and C2PA authentication, alongside the upcoming Sony a9 III, which is slated to be a compelling new camera for many photojournalists.
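For readers curious what “a digital signature at the time of capture” means mechanically, here is a minimal sketch in Python. This is not Sony’s or C2PA’s actual implementation (real C2PA manifests use public-key X.509 signatures embedded in the file); it is a simplified stand-in using an HMAC with a hypothetical device-held key, just to show the core tamper-evidence idea: the camera signs a hash of the image bytes, so any later edit invalidates the signature.

```python
import hashlib
import hmac

# Hypothetical device key for illustration only; a real camera would use a
# private key in secure hardware, never a hard-coded secret.
DEVICE_KEY = b"example-device-secret"

def sign_capture(image_bytes: bytes) -> str:
    """Return a hex signature over the image bytes at 'capture' time."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are unchanged since signing."""
    return hmac.compare_digest(sign_capture(image_bytes), signature)

original = b"\x89PNG...raw sensor data..."  # placeholder image bytes
sig = sign_capture(original)
print(verify_capture(original, sig))            # unedited file verifies
print(verify_capture(original + b"edit", sig))  # any alteration fails
```

The point of the scheme, simplified or not, is that verification happens downstream: a newsroom or platform can check the signature against the published file and detect whether anything changed after the shutter fired.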
Gruesome photographs of Palestinian children killed in rocket strikes and Israeli infants murdered by terrorists. Digitally doctored images that whip around social media before they can be verified. Accusations — since rejected by multiple news outlets — that photojournalists had advance knowledge of the Hamas surprise attack on Oct. 7.
Today, The Washington Post went further than any mainstream news organization has gone before in showing the brutality and devastation of something that plagues this nation: mass shootings.
I have lived through the move from paste-up to digital design, watched print reduced to almost nothing, led a photo team that was forced to pivot to video, and helped design many apps that were supposed to save us all. None of those changes moved at the speed of generative AI.
Jan Grarup, a celebrated Danish war photographer, has been put on the spot for allegedly ‘magnifying’ his role in a number of events, including the 1994 Genocide against the…
Politiken, the media outlet he works for, released a statement about him, noting that he admitted that “his memory has failed him in connection with several stories” regarding how he covered the 1994 Genocide against the Tutsi.
A barrage of social media disinformation related to the escalating Israel-Palestine war has made it far more difficult to discern the credibility of visual evidence.
As the lines between the real and the artificial blur, photography’s role in preserving and portraying reality becomes all the more vital. While GenAI forces a reevaluation of the purpose and essence of photography, it certainly doesn’t diminish its value. Instead, it pushes photographers to evolve, to be more discerning in their approach, and to capture the world with an authenticity that only they can provide.
An “algorithmically driven fog of war” is how one journalist described the deluge of disinformation and mislabelled footage on X. Videos from a paragliding accident in South Korea in June of this year, the Syrian civil war in 2014, and a combat video game called Arma 3 have all been falsely labelled as scenes from Israel or Gaza. (Inquiries I sent to X were met with an e-mail reading, “Busy now, please check back later.”)
You could argue that what I described in the preceding paragraph is bad enough to disqualify the man’s photography. I would certainly agree. But Newton’s photography is actually far worse, for additional reasons that I hope to make clear in what follows.
Michael Christopher Brown used the artificial intelligence (AI) image generator Midjourney to produce a series of images that explores historical Cuban events and the realities of Cubans attempting to cross the 90 miles of ocean that separate Havana from Florida.
In this Op-ed, independent photography director and educator Amber Terranova discusses one of the most controversial AI imagery projects in recent weeks.
I applied as a cheeky monkey, to find out if the competitions are prepared for AI images to enter. They are not.
We, the photo world, need an open discussion. A discussion about what we want to consider photography and what we do not. Is the umbrella of photography large enough to invite AI images to enter, or would this be a mistake?
With my refusal of the award I hope to speed up this debate.
The problem of exploitation in photography is a topic where I see discussion trend toward dogmatic and cyclical patterns. Exploitation in photography is, broadly, a photographer’s “unfair” use of what they depict in their photographs (people, places, events, etc.) for their own selfish benefit, or at least a benefit that does not reach their subjects.
With this understanding, the common tendency to fault discussions about ethics in photography is quite odd. Why wouldn’t you want to understand the possible obligations that come with powerful photography? What is the mindset behind arguing against taking responsibility for what you are creating?
In recent years, artificial intelligence engineers have used millions of real photographs—taken by journalists all over the world, and without those journalists’ permission—to train new imaging software to create synthetic photojournalism. Now anyone c
The other thing to add to the puzzle is, if you start making millions of synthetic images, then the new AI will be training on those images as well. The concept of history will become more and more distorted, because they’ll be training on the images that are not made by cameras, but made according to the way people want to see the world. What happens if people have five million images of World War II according to the way they want the war to look, and they look like photographs, so that’s what the AI is going to be training on in the future?
Documentary-making has never been ethically pure or entirely objective. (“I’m working on a project that is the kind of documentary where you do six takes of the person putting a boat in the water to get the right one,” one editor told me.) Every shot and every cut is a choice, and even its practitioners have never agreed on whether the medium is closer to journalism or to cinema. One of the earliest popular documentaries, Robert Flaherty’s 1922 film, Nanook of the North, was about a man supposedly living in the Canadian tundra, untouched by the wider world — and it was full of lies. Nanook’s real name was Allakariallak. His wife in the film wasn’t his wife. (She was, according to another local, one of Flaherty’s multiple wives.) Allakariallak hunted with a gun, but that didn’t fit the story Flaherty wanted to tell, so the director asked him to use a harpoon. In defense of his methods, Flaherty said, “One often has to distort a thing in order to catch its true spirit.”
The photo, taken by Times staff photographer Allen J. Schaben, is not gory, and was taken from a distance. But the shooter is dead and the caption reads, “Officials investigate after the suspect died of a self-inflicted gunshot wound in Torrance on Sunday.”