Ariadna Navarro
Chief Growth Officer at VSA Partners
Chicago, United States

"AI tools still rely on creativity more than they threaten to replace it.": Ariadna Navarro, VSA Partners

 


As we dive further into the world of AI, it's essential to keep human input at the forefront of automated technologies. Ariadna Navarro, Chief Growth Officer at VSA Partners, weighs in on responsibly adapting to new technology and setting ethical guardrails for how we use AI.

 

Does your agency encourage or deter the use of AI in your work? If applicable, how does your team integrate these tools into the creative process?

It’s a little of both right now. We encourage everyone to use it, but with guardrails and guidelines to keep the work honest, human, and original. At the moment, AI is best suited to things like exploration, evaluation, and experimentation. It can reliably accelerate existing processes — from research and analysis to idea generation and content creation — but it’s equally susceptible to misinformation, redundancies, and both legal and ethical issues that we’re only beginning to understand. 

In other words, AI is an exciting new option in our toolkit, but it’s nowhere near a replacement for any of the ways we work yet.

 

How does the accessibility of these tools affect the way they are used?

The accessibility factor is core to AI’s success. It’s incredibly rare for a tool to possess both the low barriers to entry and the near-infinite possibilities that AI represents. The open format and widespread availability of this generation’s AI tools have given them access to an unheard-of volume of perspectives, permutations, and information that’s driving their rapid evolution, but that also comes with increasing risk.

While the exponential growth and innovation of this nascent phase are exciting, that’s also why we can’t afford to lose any more ground in understanding and safeguarding against the dangers and threats AI could pose.

 

As AI advances, how is the role of the creative redefined? In what ways do you see the landscape of creation shifting in response to AI?

As a tool, AI represents incredibly exciting ways for creatives to augment their technical capabilities and process efficiencies. That said, while skills like generating code or compiling information can help creatives mock up an example or establish a content framework, every interaction still needs human input. At least in their current form as information aggregators — as opposed to true “intelligence” — AI tools still rely on creativity more than they threaten to replace it. Just like any form of art or media, the more creative and original the input, the more exciting the output.

Part of creativity is fueled by innate ability, but it’s also shaped by the sum of one’s life experiences; that’s still something AI can’t replicate.

However, as these aggregators increase the speed at which they understand new concepts, learn more about us, and capture the cultural zeitgeist, they’re likely to establish a sort of self-sustaining cycle. And as trends and demands continue to turn over at an increasing rate, we’ll rely even more on AI tools to keep up with the pace they’re driving, and so on.

Still, for all of AI’s excitement and possibility, this type of “threat” is a familiar one in the creative field. Nearly any time the pendulum swings in one direction (3D printing, virtual reality, streaming content), human nature inevitably reaches back for the simple, analog foundations those new technologies are born from (handmade furniture, going “off the grid,” vinyl records).

So, while the tools and outputs may evolve, I imagine the fundamental elements of creativity will be just as important in whatever form that future takes.

 

If AI furthers its capability to create and think, what is a responsible way to use these new technologies?

We’ve set ethical guardrails for how we use AI: transparency about the role it plays in our work; boundaries around what we can and can’t use it for, and what it can and can’t do; human governance, with accountability resting with humans rather than AI; and employee education through constant iteration and experimentation.

Agencies should already be critically reviewing their work and processes for things like accuracy, ethical standards, bias, and intellectual property conflicts. That’s no different with AI, or whatever follows it. Even though issues like bias in AI aren’t always obvious until you look for them, it’s unacceptable for us to deflect accountability for the actions of a machine we’re using. Especially considering the tendency to ignore threats in the rush to adopt new technologies, we need to start building the guardrails now, while things are still relatively fresh. 

We feel, however, that there is a sense of urgency in thinking about this now, rather than after the damage has been done, as is happening with social media and data privacy. We need to engage behavioral scientists, social anthropologists, cognitive scientists, and other experts to define guardrails from the get-go, to at least attempt to prevent human obsolescence.

 

