Can AI replace photographers?

Google Clips is a small camera that uses AI to decide when to take pictures and, to some extent, what to take pictures of. It can be placed almost anywhere, and it automatically photographs whatever it considers worth a photograph. Of course it does not move; it has to be placed, and Clips then decides when to take the photo. As I understand it, it likes movement, so if something moves in front of it, it snaps a photo. It also snaps a photo when a face comes close enough, and so on. Through the Clips app it learns to recognise the faces that appear most often on your smartphone, so it takes more photos of your kids than of strangers. I have not used or seen Clips myself, but that is the basic concept I gathered from reading about it.
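For readers who like to think in code, here is a minimal, purely illustrative Python sketch of that idea. The Frame fields, the weights, and the threshold are my own assumptions for the example; Google has not published how Clips actually scores moments.

```python
# Toy sketch (NOT Google's actual algorithm) of how a Clips-style camera
# might score each frame: favour motion and close, familiar faces,
# then keep only high-scoring moments.

from dataclasses import dataclass

@dataclass
class Frame:
    motion: float            # 0..1, how much movement was detected
    face_size: float         # 0..1, how large/close the biggest face is
    face_is_familiar: bool   # matches a face learned via the Clips app

def frame_score(f: Frame) -> float:
    score = 0.4 * f.motion + 0.4 * f.face_size
    if f.face_is_familiar:
        score += 0.3         # prefer your kids over strangers
    return score

def should_capture(f: Frame, threshold: float = 0.6) -> bool:
    return frame_score(f) > threshold

# A familiar face moving close to the camera triggers a capture;
# an empty, static scene does not.
print(should_capture(Frame(motion=0.7, face_size=0.6, face_is_familiar=True)))   # True
print(should_capture(Frame(motion=0.1, face_size=0.2, face_is_familiar=False)))  # False
```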


By clicking the photo you will be directed to Amazon. It is an affiliate link, and if you purchase the item I will get a small commission, but the price for you stays the same. Image: Amazon.


The interesting question is: can it replace a photographer? Without seeing Clips in action I would say no. Even though AI is getting better and better, it is not yet capable of making all the decisions a photographer makes when making photographs. As photographer Ben Long said, "The act of photography is the act of expression" (quoted from CNN's article about Clips). So it would be an act of expression by the one who wrote the algorithm, not by the photographer.

Even though AI is nowhere near the level of an experienced photographer, it might be good enough for a casual snapper. Clips might be able to take better photos than an average snapper. If placed correctly, it can record images from a dinner party, for example, that might be good enough. A few Clips units placed around the table might catch images that can be kept as memories of that party. Most likely it still cannot catch the decisive moment, but on the other hand, how many snappers can?

The problem with AI, for now, is that it can most likely only take photos that follow the common "rules" of photography. But that is just the starting point from which a photographer begins to make the image. Photographs that follow every rule of photography are very seldom interesting. The interesting and "good" photographs are often the ones where the "rules" have been broken, where there is some creative element that the photographer has put into the photo. So far AI cannot do that, but we never know when that is going to change. Most likely the next step is collaboration between AI and photographers to get better images.

Here is the link to the CNN article that inspired me to blog about Clips and AI.


Disclaimer: The link to Google Clips is an Amazon affiliate link. If you purchase Clips I will get a commission, but the price for you is the same.

Adobe is getting really serious about AI.

Adobe introduced some impressive new AI-driven features that are in the making, with Adobe Sensei as the brains behind them. As a photographer and videographer, the most impressive possible future features for me were of course the ones that had to do with video and photography.

Content Aware Fill is a great tool, but it does not work with every image and situation. Deep Fill is Content Aware Fill on steroids: it does not only analyze the surroundings, it also tries to understand what is really underneath the element we are trying to fill. Another technology that tackles the same problem is Scene Stitch, which I think is even more impressive than Deep Fill. Scene Stitch replaces a part of an image by going through millions of images from Adobe Stock; it takes a part of an image it finds in the stock and uses it to replace the part you have selected. Scene Stitch raises a few questions about copyright and about the documentary value of a photograph. I understand that this is not a problem in design, illustration, marketing, or advertising. The problem might arise if it crosses over into documentary photography, because it challenges the truth and the documentary value of a photograph.
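To make the baseline idea concrete, here is a small sketch using OpenCV's classical inpainting, which only fills a hole from the surrounding pixels. This is not Adobe's algorithm, and the file names are placeholders; Deep Fill and Scene Stitch go much further by trying to understand, or borrow from stock, what should actually be there.

```python
# Classical "fill from the surroundings" inpainting with OpenCV.
# Not Adobe's Content Aware Fill, Deep Fill, or Scene Stitch -- just
# the simpler idea they all start from.

import cv2

# Placeholder file names: the mask is white where the unwanted
# element is and black everywhere else.
image = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Classical inpainting only looks at nearby pixels around the hole
# (here with a 5-pixel radius), so it cannot invent a tennis court
# behind a removed pond the way Scene Stitch does.
filled = cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("filled.jpg", filled)
```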

Let's take an example pulled right out of Adobe's YouTube video about Scene Stitch.
  

Before (screenshot from Adobe's YouTube video)

One of the After photos that Scene Stitch suggested (screenshot from Adobe's YouTube video)

Scene Stitch gives several suggestions, and the user can then pick the one that works best. The tennis court is taken from an Adobe Stock photograph. The question I keep coming back to is: what kind of a document is the after shot? The pond has been replaced with tennis courts. There was no mention in the presentation of whether someone gets paid when parts of their images are used. I would assume that there will be money involved when a stock image is used. At least I hope so.

Of course, these new features may or may not end up in future versions of Adobe apps. Adobe is experimenting with many different new things.

Links to the YouTube videos I mentioned in the post:

Scene Stitch

Project Deep Fill
 

The blogger is an Olympus European Visionary whose native language is Finnish.