Why Nature will not allow the use of generative AI in images and video

160 points
10 months ago
by geox

Comments


throw101010

> Nature will not be publishing any content in which photography, videos or illustrations have been created wholly or partly using generative AI, at least for the foreseeable future.

Allow me a somewhat rhetorical question: what are the chances they already publish photos taken on devices that apply, by default, some form of AI-based generative/corrective algorithm, like Samsung's "AI detail enhancement engine" (the one they use to enhance photos of the moon)?

10 months ago

analog31

I'm peripherally involved in this scene. The answer is that the journals don't want processed images, but of course the scientist doesn't always know what kind of processing happened to the image en route to their display and file system. The idea is that an image supposedly constitutes "data" and as such, should represent raw data.

Also, what constitutes "raw data" is itself a matter of debate. How raw is raw? Like any interesting pursuit, scientific publishing struggles to keep up with developments in technology.

10 months ago

codetrotter

> How raw is raw?

Certainly no JPEG image produced by any digital camera is really “raw”, as it will already have been through a debayering filter.

https://en.wikipedia.org/wiki/Bayer_filter

And then on top of that are the JPEG compression artifacts.

But I do wonder how many raw files also contain data that has been debayered already. I have not looked into that.

I know that with third party firmware such as Magic Lantern it is possible to get the image data without debayering. https://magiclantern.fm/

Likewise, I know it is possible to retrieve the image data from the Raspberry Pi Camera Module 3 without debayering.
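To make the debayering point concrete, here's a toy bilinear demosaic in Python. This is a sketch only (real camera pipelines use far more sophisticated edge-aware interpolation); it simulates an RGGB sensor and then reconstructs full colour by interpolation:

```python
import numpy as np

def mosaic_rggb(rgb):
    """Simulate an RGGB Bayer sensor: keep only one colour sample per pixel."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (red rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (blue rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return raw

def _conv3x3(img, k):
    """3x3 correlation with zero padding (the kernel here is symmetric)."""
    h, w = img.shape
    p = np.pad(img, 1)
    return sum(k[dy, dx] * p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3))

def demosaic_bilinear(raw):
    """Fill in the two missing colours at each pixel by weighted averaging
    of the nearest same-colour samples."""
    h, w = raw.shape
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True
    masks[0::2, 1::2, 1] = True
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True
    out = np.zeros((h, w, 3))
    for c in range(3):
        num = _conv3x3(np.where(masks[..., c], raw, 0.0), kernel)
        den = _conv3x3(masks[..., c].astype(float), kernel)
        out[..., c] = num / np.maximum(den, 1e-12)
    return out
```

Even this trivial interpolation is already "inventing" two-thirds of the colour values in the output image, which is the sense in which no debayered image is truly raw.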

10 months ago

Wowfunhappy

I feel like there's a meaningful difference between that stuff and the computational photography common on smartphones.

10 months ago

astrange

There may be a meaningful difference between demosaicing and generative AI - actually there isn't because demosaicing/upscaling/image generation are all the same problem, but there might be one since people like to think of them as different.

There isn't a difference between auto white balance and generative AI though. The colors in an auto mode digital camera picture are not real.
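To illustrate why auto white balance is a guess, here is the classic "gray-world" heuristic as a toy Python sketch (real cameras use far more elaborate, often ML-based, estimators): it assumes the scene averages to neutral gray and rescales the channels to match, so the output colours depend on scene content, not just on the light that hit the sensor.

```python
import numpy as np

def gray_world(img):
    """Gray-world auto white balance: assume the average scene colour is
    neutral gray, and scale each channel so the channel means agree."""
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gains = means.mean() / means             # per-channel scale factor
    return np.clip(img * gains, 0.0, 1.0)

# A flat scene with a warm cast gets "corrected" to neutral gray,
# whether or not the cast was really the illuminant's fault.
warm = np.full((4, 4, 3), 1.0) * np.array([0.6, 0.5, 0.4])
balanced = gray_world(warm)
```

A genuinely orange wall photographed under white light gets the same "correction" as a white wall under orange light, which is exactly the sense in which the colours are not real.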

10 months ago

robocat

You can only get real colours in a raw format digital picture.

10 months ago

frostburg

Not really. First of all, the colours aren't "real" even in that raw data; and second, if you accept that as real, you can get per-pixel chroma information with Foveon sensors, pixel-shift systems, monochrome sensors with lens filters...

10 months ago

astrange

Well, a more fundamental issue is that the colors of an object in a scene aren't "real" if the lighting in the scene is not daylight white. And most of the time you're not there to appreciate the low-CRI yellow indoor lights.

10 months ago

sudosysgen

The RAW data for the vast majority of MILCs and DSLRs is pre-debayering.

10 months ago

userbinator

Even Android smartphones will give pre-debayering raw data from the sensor if you use the appropriate camera app. (There are quite a few cheap ones where the OEM debayering filter is just horrible, and the sensor is actually capable of much better quality.)

10 months ago

morphicpro

[flagged]

10 months ago

jacquesm

Astronomy is in for a hard time then, anything that uses false color is technically very much processed.

10 months ago

lkbm

This is certainly not a "no processing" policy.

Where the line is drawn as to what's "generative" and what's "AI" may be blurry, but they haven't just banned traditional transform operations.

10 months ago

progrus

I think if it gets all the way to a computer model rendering, where the raw data input is in no way shaped like an image “yet”, the distinction between traditional and new-generative-model approaches sounds more like a difference in degree.

10 months ago

dylan604

passing photons through a filter in front of the sensor is absolutely not even close to being the same as "AI" post processing of the data.

10 months ago

TeMPOraL

But the thing they pulled to give us the "photo" of that black hole absolutely is.

10 months ago

jxramos

I like that:

> Like any interesting pursuit, scientific publishing struggles to keep up with developments in technology.

I'm going to keep that in mind, there does seem to be this interesting human nature presumption that everyone keeps in sync with the latest and greatest. But that's simply not the case.

10 months ago

Ajedi32

I mean, in digital photography "raw" has a very well defined meaning: https://en.wikipedia.org/wiki/Raw_image_format

10 months ago

joshspankit

Since enhancements (and “enhancements”) are going to get more pervasive, it feels like a good time for smartphones and cameras to add a “scientific” setting that only stores the unprocessed sensor data.

10 months ago

dclowd9901

I’d say “raw” is light imprinted on film. I may be biased but wouldn’t mind seeing 35mm make a comeback.

10 months ago

dylan604

that's not what "raw" means though, and this is a really weird interpretation

10 months ago

lm28469

> I may be biased but wouldn’t mind seeing 35mm make a comeback.

It's been on the rise for the past few years.

10 months ago

ForestCritter

Yes, me too. I have a Minolta Rangefinder in a box that needs the crank spring fixed. I've kept it because it captures the lighting exactly like I see it with my eyes. I would get the best night/twilight/evening pictures with it, and I would specify no color correction at the developer. I don't think digital photographers understand what it is to capture the light. I have a newer Minolta in working condition that also takes excellent quality pictures.

10 months ago

chasing

“Generative AI.” I know there are kind of weird edge cases. “My iPhone made the sunset way redder than it was in real life.” But I think we all know what they’re talking about and I suspect if you’re in a position for it to really be a concern then you will communicate with Nature and sort out what their comfort zone is.

10 months ago

etrautmann

This is fascinating, and it gets pretty subtle. As a computational neuroscience person: some of the more advanced neural-signal-processing algorithms use generative models internally to model recorded neural data. The result is likely a smoothed, simplified, and hopefully more interpretable view of the neural data, but there's no guarantee that some portion of the resulting multidimensional signal isn't hallucinated.

As a result, most findings should be validated by verifying that some property of interest is present in the high dimensional raw neural data, though that's only conceptually possible sometimes.

10 months ago

[deleted]
10 months ago

CharlesW

> …AI-based generative/corrective algorithms…

Every photo is touched by "corrective algorithms". Nature is talking about generative AI specifically, which means using an LLM to generate part or all of an image. This precludes using Midjourney, Photoshop's new "generative fill", etc.

I assume that what Samsung's "Space Zoom" feature does — replacing elements with higher-quality stock photography — was already disallowed. If so, whether the elements were identified/replaced manually or automatically isn't really a concern from an editorial perspective.

10 months ago

ghaff

Yes, they already had guidelines for photographs. [1]

e.g. "Digital images submitted with a manuscript for review should be minimally processed. A certain degree of image processing is acceptable for publication (and for some experiments, fields and techniques is unavoidable), but the final image must correctly represent the original data and conform to community standards. Editors may use software to screen images for manipulation."

[1] https://www.nature.com/nature-portfolio/editorial-policies/i...

10 months ago

dclowd9901

Sounds like this would allow for processed astronomical photography too. Methinks this question wasn't in earnest.

10 months ago

ghaff

No reason to assume bad faith. But some folks get very literal. And if you have a legitimate question, that's one of the things editors are for.

10 months ago

MiguelX413

How might one use a Large Language Model “to generate part or all of an image”?

10 months ago

seabass-labrax

'Text-to-image' systems like Stable Diffusion really are built on large language models: a text encoder (CLIP's, in Stable Diffusion's case) maps text tokens into an embedding space that conditions the image-generation model. As part of this training step, the system learns the concepts behind certain words and grammatical constructs.

There are quite a few in-depth explanations of the whole system; here's one for instance: https://jalammar.github.io/illustrated-stable-diffusion/

10 months ago

PartiallyTyped

Something like this:

    Hey ChatGPT, write a prompt for midjourney to generate a realistic photo of XYZ with ABC parameters.
Then plug it into Midjourney.

Technically the LLM isn't generating the image, and I agree, but I think their point is rather obvious and we need not be intentionally obtuse nor needlessly pedantic.

10 months ago

ghaff

I think you can argue that there is significant daylight between "created wholly or partly using generative AI" and the sort of ML-based noise reduction, sharpening, etc. that you see in products like Lightroom and Photoshop. Of course, the whole area will evolve and rules like these will have to evolve as well. News photography has dealt with this since pre-digital although different publications may have different standards.

10 months ago

hgsgm

They'll have to use an AI to determine what manipulation counts as AI.

10 months ago

morphicpro

There is also a big distinction between fully disclosing that an image involves AI and non-disclosed post-production edits. So why not just ask people to be honest about the means of creating the content and disclose the images as "AI"-created, rather than telling them they should be purists in their craft? Is the claim that art made by AI is itself harmful, or that the act of making it causes harm? Where is the real harm, given that people are properly informed as to the type of content they are looking at? The reason is that this has nothing to do with art and expression and everything to do with control under the guise of fear.

10 months ago

[deleted]
10 months ago

m3kw9

For them, purposely asking Stable Diffusion for an image (not OK) vs. iPhone image processing (OK) would be the baseline for distinguishing. Picking at small details seems like a nice way to waste time; you've just got to keep it simple.

10 months ago

LapsangGuzzler

> devices that apply by default some form of AI-based generative/corrective algorithms like the "AI detail enhancement engine"

Isn't this a contradiction, though? My understanding is that generative AI is created entirely from software, using a network of previously created images as input. A corrective filter modifies an image taken directly from a sensor instead.

I personally don't mind aesthetic corrective modifications to photos. I was at an astronomy observatory last night and learned that most of the magnificent images we've seen of distant nebulas and galaxies have post-production coloring applied; they mostly look black and white coming off the sensor. Does the coloring fundamentally change our understanding of what it is that we're looking at? I don't think so, and that's where I draw the line.

10 months ago

dragonwriter

> My understanding is that generative AI is created entirely from software, using a network of previously created images as input. A corrective filter modifies an image taken directly from a sensor instead.

Your understanding is incorrect: generative AI can modify an image taken from any source, as well as create images from scratch.

10 months ago

Imnimo

>Artists, filmmakers, illustrators and photographers whom we commission and work with will be asked to confirm that none of the work they submit has been generated or augmented using generative AI

"or augmented"

10 months ago

golemotron

There's no hard line in the technology. This means that a ban is pointless because the landscape is going to keep changing.

It's interesting to compare this to other situations where, say, law tries to create lines that aren't really there and the incentive to ignore imaginary ones is greater than the incentive to keep them.

This seems to be a very common phenomenon with technology.

10 months ago

pxc

In the case of Nature, it functions as a statement of values that scientists publishing with Nature will be happy to comply with to the best of their abilities.

I doubt that the editors are under some illusion that the nominal ban will create a hard line in reality. I'd be surprised to learn that that is their idea of success with this measure.

10 months ago

ghaff

The context matters. There are image manipulations I might do to a photo I'm going to hang on the wall that wouldn't be kosher if I were shooting an event for a newspaper especially with respect to removing objects from the photo.

10 months ago

charcircuit

Some generative AI tools let you input a base image to work off of. You can definitely use generative AI for just sharpening in these tools.

10 months ago

belter

It was not a problem when presenting the EHT results and subsequent "images" - https://www.space.com/first-ever-black-hole-image-ai-makeove...

10 months ago

criddell

Were those produced by a generative AI?

10 months ago

belter

No, but under the same algorithmic principles. How do you think that yellow color came about?

10 months ago

renonce

The part of the article discussing problems with generative AI seems to lay emphasis on "attribution". The problem with generative AI is not that they want raw and unedited images, but that generated images cannot be properly attributed to their sources. That's what distinguishes generative AI from regular image-editing operations like cropping, scaling, filtering, etc. So something like an "AI detail enhancement engine" should not be problematic under such a definition, as the tool can simply be attributed to Samsung.

10 months ago

asynchronous

Probably close to 100% at this point.

10 months ago

Kapura

I am not in the game at all, but as I understand it, Samsung doesn't make high-quality DSLRs of the type used by photographers. I reckon photographers would be asked to disclose this sort of thing when submitting to Nature in future.

10 months ago

0xBABAD00C

> what are the chances they already publish photos taken on devices that apply by default some form of AI-based generative/corrective algorithms

100%?

10 months ago

ChatGTP

On the other hand, I really hate what these algorithms do to my photos, even my new iPhone which is considered good tech. So I get it.

10 months ago

firefoxd

I'm sad to see the comments here arguing about small details and nuance. What if the image is from a phone that uses AI to do blah blah blah?

The reality is we all know what kind of images to expect from Nature. Generative Ai is not appropriate there and we all know it.

10 months ago

slyall

If you look at the covers of the magazine[1], about 50% of them are not actual photos of real life.

eg 18 May 2023[2] "The cover shows an artist’s impression of two male mammoths fighting"

or 20 April 2023[3] which shows the DART spacecraft, apparently photographed from nearby in space.

[1] https://www.nature.com/nature/volumes

[2] https://www.nature.com/nature/volumes/617/issues/7961

[3] https://www.nature.com/nature/volumes/616/issues/7957

10 months ago

matteoraso

Yeah, this backlash is really weird. The only time where generative AI images are appropriate in an article is when the article is actually about generative AI, and Nature isn't banning that. What's the problem?

10 months ago

edanm

Wait, what? Why do we all know that Generative AI is not appropriate?

If I just want some random artwork, like an image of, I don't know, a blackboard, why is using Generative AI inappropriate?

10 months ago

m3kw9

Not sad, just annoying.

10 months ago

inciampati

I just published conceptual art that I created using Midjourney in Nature!

Ironically, Nature's own licensing rigor drove me to generate this art. It was replacing content that had come from other sources, where the time to obtain and clear copyright was too long for our timeline. More hilariously, one of the images that I replaced was from the US government, and in the public domain. The other was from a consortium in which I am part of the project leadership.

They seemed perfectly okay with this, as long as I proved to them that I had the professional Midjourney account where copyright is not encumbered. I wonder when they will again allow this kind of use.

10 months ago

firefoxd

Can you share the article?

10 months ago

abeppu

I think it's unfortunate that they feel pushed to have a blanket policy. Not all images hold themselves out to be representative of a specific truth. If an article calls for an illustrative diagram of, e.g. a generic manifold representing energy associated with points in a parameter space, in context, readers should understand it as a hypothetical case whose specific attributes are not the focus, and there isn't really an opportunity to be 'misled' by it. If an article needs a microscopy image of tissue that has been treated by some factor being studied, then swapping in a DALL-E image in place of one produced through actual microscopy (and post-processing) _would_ be misleading. But the context of what the image purports to represent is critical.

10 months ago

rflrob

One thing that’s confusing is that Nature has two purposes: first as a scientific journal, and second as a science news magazine. They’re bundled in the same physical issue (though there are also branches of the journal, eg Nature Genetics, Nature Chemistry, etc), but internally handled by different staff. I suspect the policy will mostly be relevant to the news magazine side, though you would also want to ensure that a paper on the journal side doesn’t include an AI generated image in a non-AI context.

I just asked DALL-E for “A scientific illustration of a membrane-bound protein being phosphorylated”, and while the results aren’t all that credible, I could imagine using them as a starting point.

10 months ago

infoseek12

A lot of diagram generating tools are starting to incorporate generative AI of some form. In some instances the UI probably won’t make it clear that underlying LLM technology is being used.

I wonder if their graphics designers will need to move from industry standard software to something less capable. Interestingly the Amish may have been ahead of their time in creating purposely limited technology that was compatible with their beliefs (https://www.npr.org/sections/money/2013/02/25/172886170/a-co...).

10 months ago

goerz

Which diagram generating tools?

10 months ago

infoseek12

Adobe and Figma are two of the leading companies in that space. They seem to be integrating the new tech at a fairly rapid pace:

Adobe - https://www.adobe.com/sensei/generative-ai.html

Figma - https://www.figma.com/community/plugin/1145446664512862540/A...

Some new tools have popped that are centered around generative AI (I have no idea if they’re any good):

Prototypr - https://prototypr.io/toolbox/diachat

Diagram - https://diagram.com/?ref=Welcome.AI

10 months ago

goerz

Cool. Thanks!

10 months ago

mgraczyk

The first justification in the article is silly and detracts from Nature's position: "we all need to know the sources of data and images, so that these can be verified as accurate and true"

How do you verify whether this cartoon illustration of stacks of money against a red background is "accurate and true"?

https://media.nature.com/lw767/magazine-assets/d41586-023-01...

Would it have made a difference if that image were generated by Midjourney?

The actual reasons, given later in the article, are that Nature is taking a political/legal position on copyright and privacy. That's fine by me, but it's disappointing that they give a misleading and nonsensical justification before the actual justification, as if to make their stance sound less political.

10 months ago

seydor

This is unimportant - and they are doing it for attribution reasons.

But it is irony of ironies for Nature, which sources all its content AND revisions from the open community, to say it cares about fair copyright compensation of creators.

10 months ago

HPMOR

Of course they care about copyright! It is their whole business model, after all! Sci-Hub is “bad” because “copyright”; ergo generative AI is “bad” because copyright.

10 months ago

seydor

Hold on, so you are saying that because generative AI uses open content, it can't be copyrighted properly? Hmm, I wonder who else is using publicly funded content and editors... And by the way, things like Adobe's generative AI are definitely trained on licensed content, but Nature doesn't even allow that.

Aren't they delegitimizing their own business model by claiming such things?

10 months ago

wilg

> For now, Nature is allowing the inclusion of text that has been produced with the assistance of generative AI, providing this is done with appropriate caveats

Why not just apply this rule to all media? What is the purpose of singling out images and video?

10 months ago

theodric

From my perspective in 2023: based. But in 50 years' time this will be regarded as a bizarrely conservative, even Luddite, position (unless GPT-9 ends up kicking off WW4).

10 months ago

firatsarlar

This is just one of many perspectives, highlighting an approach based on results. Generative AI is simply a process that produces output aligned with our expectations. It should revolve around managing expectations and embracing different perspectives. Rather than delving into mystery and religion, which might take us outside the realm of academia, perhaps we could explore alternative fields. Let's consider an image of a sleeping cat: I attempted both photographing a real cat asleep and generating an image using MJ. Interestingly, the AI output resembled a "cat pretending to be sleeping," while the other cat was genuinely asleep. It's all about the standards we set.

10 months ago

etrautmann

Does this policy apply to cover art as well as figures and data as part of articles?

10 months ago

rkagerer

Good for them! I'm glad we're starting to see some pushback against the shady practices vendors of this tech employ regarding their datasets. I hope someone figures out how to maintain and apportion attribution (even if it's as awkward as eg. list the million images that contribute >X% to a given result).

10 months ago

sebzim4500

I would expect that if X was 1 then there would almost never be a single image that contributes more than X%.

So you'd have to make X=0.0001 or something, and then what? Pay them all a fraction of a cent?
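Back-of-envelope, with entirely made-up numbers (both the licensing pool and the image count are hypothetical, just to show the order of magnitude):

```python
# Hypothetical: a $1,000 licensing pool for one generated image, split evenly
# across every training image clearing a 0.0001% contribution threshold.
pool_dollars = 1_000
contributing_images = 5_000_000  # assumed count above the threshold
payout = pool_dollars / contributing_images
print(f"${payout:.6f} per image")  # prints "$0.000200 per image"
```

At anything like that scale, each individual payout is a small fraction of a cent, which is the practical problem with per-image attribution schemes.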

10 months ago

malkia

10 months ago

mgraczyk

Second paragraph of the article: "Apart from in articles that are specifically about AI"

10 months ago

dclowd9901

Apropos of nothing, I’m always really encouraged to see companies take strong philosophical stances like this. I don’t think it’s a particularly controversial stance, but all the same, it’s encouraging to know they want to promote integrity in this space and try to set an example.

10 months ago

Der_Einzige

Good luck enforcing any of these bans. AI models are multiplicities (model + lots of sampling, decoding parameters, etc).

In general, it's extremely difficult to prove that anything is AI generated at all. Even more impossible to prove which model was used with which settings.

10 months ago

chasing

Plagiarism can also be tricky to identify and prove. But the reputational harm of lying if you’re caught can be massive and an effective deterrent if you actually care about your career.

I’ll say that even in my personal life, if I catch you flat-out lying to me about something, I have a very difficult time reestablishing trust. It’s like you’ve revealed that deep down you think it’s acceptable behavior, and now everything that comes out of your mouth has to be weighed as possible bullshit.

10 months ago

Waterluvian

It’s not really about enforcement. It’s about saying it’s not allowed. That’s sufficient for many cultures.

10 months ago

CharlesW

> In general, it's extremely difficult to prove that anything is AI generated at all.

It seems like it could be pretty simple — if there's a question, you ask the creator to provide the original RAW and have a conversation about how they got to the final "developed" image. If there's still doubt, they could be asked to duplicate/approximate the process in a screen-sharing session.

I'm not familiar with the current state of content provenance initiatives like Content Authenticity Initiative¹, but generative AI is likely to boost their popularity.

¹ https://en.wikipedia.org/wiki/Content_Authenticity_Initiativ...

10 months ago

LapsangGuzzler

That's a good point. RAW is such a common format in the photography community, but a somewhat silly format for a generative AI to write to, given the file size.

Also, is generative AI capable of dramatically upscaling the quality of its output relative to its input? I would assume so, but I've never really thought about it.

10 months ago

CharlesW

> RAW is such a common format in the photography community but somewhat of a silly format for a generative AI to write to based on its file size.

You could cheat and convert the image to a RAW file, but it'd be very difficult to do so in a way that would fool a forensics investigator.

> Also, is generative AI capable of dramatically upscaling the quality of it's output relative to its input?

If the image output is too small, one could use tools like Topaz Gigapixel AI to scale it up.

10 months ago

ibushong

I think it's pretty naive to think that AI tools won't soon be able to create/edit photos in RAW format that are impossible to detect. IMO, the only way to verify the authenticity of stuff like this is to have hardware on the device (camera) that adds a signature/timestamp/etc. to the data. This of course would require all devices to be in some sort of registry, which then becomes a privacy concern. It's gonna be a mess...

10 months ago

CharlesW

> IMO, the only way to verify authenticity of stuff like this is to have hardware on the device (camera) which adds a signature/timestamp/etc to the data.

Cameras have been doing that for at least a decade, but that's not foolproof either. For example, Canon's Original Data Security system was cracked in 2010. https://photographybay.com/2010/11/30/canon-original-data-se...

The reason it's hard to fake RAW files isn't because one can't convert images to RAW files, but because RAW files contain lots of additional information that would be difficult to fake. For example, RAW files include mosaiced sensor data which has flaws that are unique to a particular sensor.¹ A digital forensics expert can evaluate a hundred aspects of RAW files to see if anything smells fishy.

¹ https://www.labmanager.com/sensor-imperfections-are-perfect-...
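The sensor-fingerprint idea (photo-response non-uniformity, PRNU) can be sketched with a toy simulation. Everything here is made up (a crude box-filter "denoiser", Gaussian noise, arbitrary constants), but it shows the principle: noise residuals from the same sensor correlate with that sensor's fingerprint, while residuals from a different sensor don't.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
prnu_a = rng.normal(0.0, 0.02, (H, W))  # fixed pattern of sensor A
prnu_b = rng.normal(0.0, 0.02, (H, W))  # fixed pattern of sensor B

def capture(scene, prnu):
    """Simulate a shot: multiplicative PRNU plus random shot noise."""
    return scene * (1.0 + prnu) + rng.normal(0.0, 0.01, scene.shape)

def residual(img):
    """Crude high-pass 'denoiser': subtract a 3x3 box blur."""
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[dy:dy + H, dx:dx + W]
               for dy in range(3) for dx in range(3)) / 9.0
    return img - blur

def corr(a, b):
    """Normalized cross-correlation of two residual maps."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Estimate sensor A's fingerprint by averaging residuals of many flat frames.
scene = np.full((H, W), 0.5)
fingerprint = np.mean([residual(capture(scene, prnu_a)) for _ in range(50)],
                      axis=0)

# A new shot from sensor A matches the fingerprint; a shot from B doesn't.
match_a = corr(residual(capture(scene, prnu_a)), fingerprint)
match_b = corr(residual(capture(scene, prnu_b)), fingerprint)
```

Real forensic PRNU pipelines use wavelet denoising and maximum-likelihood fingerprint estimation rather than a box blur, but the matching logic is the same.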

10 months ago

colechristensen

I don’t know, a whole lot of generative AI imagery contains obvious artifacts. Just go down to the noise floor of size and intensity: AI doesn’t look like thermal noise in a sensor, or like real lens artifacts and fuzziness. Not to mention obvious things like mangled hands or other complex structures.

10 months ago

jsheard

There are also second-order giveaways that someone is using AI generation, in the case of photos the photographer would probably take numerous shots of the subject before submitting the best one, and if challenged they could produce the rest of them as evidence that they're the real deal. As far as I'm aware, using AI to generate a plausible series of photos with all of the details being consistent between them is much more difficult than generating just a single plausible photo.

In the case of artwork, the author of even the most convincing, artifact-free AI generated piece will immediately crumble if asked to show WIPs, non-flattened project files or timelapses. I have seen some charlatans attempt to fake WIPs by using style transfer to turn their finished piece back into a "sketch" but the results aren't very convincing, the models aren't trained on the process of creating art conventionally so they're not good at faking it.

10 months ago

Der_Einzige

This is possible today, it's called "reference only controlnet".

10 months ago

golemotron

True. The Copyright Office is going to eventually have to walk back its recent guidance too. Whether they will realize they need to on their own or need to have Congress to act is the only question.

10 months ago

[deleted]
10 months ago

Neilsawhney

So sad, they already broke that rule in the first image they included.

10 months ago

DrammBA

That image was not "created wholly or partly using generative AI". It's merely a photograph that happens to contain an AI-generated image displayed on a smartphone screen. Funny how they basically show you how to circumvent the new policy.

10 months ago

tgsovlerkhgsel

That sounds very much like "created partly using generative AI" to me.

10 months ago

cpeterso

The article says “Apart from in articles that are specifically about AI”.

10 months ago

aurizon

This is a rearguard action; in a few months - little more - tech will do an end run.

https://www.forbes.com/sites/danielfisher/2012/01/18/sopa-me...

10 months ago

skilled

> for the foreseeable future

The publication already has a reputation and I don’t think people would judge Nature if they used Midjourney for featured images.

Videos are an entirely different thing; it will take a few more years for AI to be able to create interesting videos, so in a sense it is meaningless to even mention them.

10 months ago

[deleted]
10 months ago

PlasmonOwl

Hahahahah. Nature. Integrity. Fuck me.

10 months ago

[deleted]
10 months ago

varelse

[dead]

10 months ago

activiation

[flagged]

10 months ago

bentcorner

I can tell you didn't read the article.

> Apart from in articles that are specifically about AI, Nature will not be publishing any content in which photography, videos or illustrations have been created wholly or partly using generative AI, at least for the foreseeable future.

Plus in this very article they have a photo containing AI-generated art, but it's done in a way that is obvious - it's a photo of a user using DALL-E with appropriate credit.

10 months ago

activiation

[flagged]

10 months ago

morphicpro

[flagged]

10 months ago

chasing

Is it exhausting having to reframe every single thing through a bizarro culture war lens? Seems like it would be.

10 months ago

morphicpro

I think it's more exhausting using platforms and syndication to promote ideologies claiming that something causes harm (mostly only to those who are successful, while mostly being used by those who are successful) and thus should be limited or controlled, when AI has the most value for those they wish to control, since it makes tasks accessible to those without. To me it's more like the people in power fighting to keep that power, which I could give no shits about. The only thing I could say at this point to those still putting up a fight: deal with it. This is a matter of making things accessible, not a matter of who has the most talent. They would like you to think they are more worthy of making art than you. I'd be more worried about that.

10 months ago

hooverd

I don't see why Nature is obligated to publish you, free expression or not.

10 months ago

morphicpro

I don't see why Nature is pandering to the privileged while also saying that people who get aid can fuck off. When you frame it as a matter of accessibility, does that make you feel like an ass for telling people that they must make the grade and their work is not worthy? What kind of inclusive community does that create? Oh, can't afford $$$ worth of glass and cameras? Get lost. That's all I see here. I'm fine avoiding that community all the same, too. Think about how many young people are poor and have no means to get the gear required to participate, except that they have this app that would allow them to. But this community has made a clear message to that person: they are not welcome. I think that's sad, a real statement unto itself, and a perfect reflection of our current "nature".

10 months ago

nbardy

This reeks of performative grandstanding.

Dictating the tools that artists use for a commission is punitive and moralizing.

Let the artists decide the morality of their own profession.

10 months ago

CharlesW

> This reeks of performative grandstanding.

It's a straightforward clarification of their existing editorial policy. https://www.nature.com/nature-portfolio/editorial-policies/i...

10 months ago

hgsgm

Nature is not an art magazine.

10 months ago

morphicpro

[flagged]

10 months ago

swayvil

This medium, text, pictures, video. It's seductive. It's tempting to pretend that it is reality, but it isn't.

I know that's a naive truth and we all know it. But still, we really do pretend otherwise.

I think that might be a bigger deal than we acknowledge. I think maybe our sanity is bent from living this way.

10 months ago

ilrwbwrkhv

Simulacra and Simulation

10 months ago

[deleted]
10 months ago

neilv

> Saying ‘no’ to this kind of visual content is a question of research integrity, consent, privacy and intellectual-property protection.

Evidence that STEM people can think clearly about this, when their paycheck doesn't depend on pretending otherwise.

(Personally, I'm going to be in the latest AI techbro gold rush, but will try to do it responsibly.)

10 months ago