Mercy Students and Faculty Explore AI Opportunities and Risks


The Mercy community tuned in on April 15 for a thought-provoking discussion about artificial intelligence (AI). The Mercy University School of Liberal Arts (SLA) presented an event titled “Your Future with AI: Roundtable Discussion with Industry Leaders,” featuring experts who discussed AI’s opportunities, risks and ethical considerations. The panel discussion drew approximately 75 attendees, who asked thoughtful questions and continued the conversation in the chat. Events like this one equip Mercy students and faculty with insights and perspectives that will help them navigate their education and careers.

The panel consisted of three leaders working at the forefront of AI innovation: Scott Bounds, senior director of Azure Data and AI at Microsoft; Dustin Rainey, senior director of strategy and innovation at consumer data company Acxiom; and Jason Snyder, global chief AI officer at experiential marketing agency Momentum Worldwide. Moderators Damien Germino, associate dean of operations for the Mercy University School of Nursing and adjunct faculty, and Jade Snyder, assistant professor of communication arts, guided the conversation.

After sharing examples of how their companies are currently using AI, the panelists discussed how various industries are beginning to integrate AI to increase productivity and improve outcomes. In healthcare, for example, some clinicians are using AI to help them see patterns in data and make better decisions. The panelists agreed that people should own their own health data, though sharing that data may help researchers find cures or treatments for diseases. AI has the power to democratize healthcare, but the panelists raised concerns about the business model and the risk of AI being used as a premium service, potentially leaving the underserved without access. 

AI is already impacting the job market, especially hiring processes. With the help of AI, it is easier than ever to put together a résumé. However, AI is also making decisions about job applications before human hiring managers ever see them. Snyder recommended that students looking for jobs clean up their digital presence but preserve their unique voice, which is what makes them human.

The importance of consent came up several times. Bounds mentioned that people generally accept that their data is being shared but still want to be able to protect themselves. Snyder went a step further by emphasizing that consent is broken and must evolve. “We need systems that respect everyone’s agency from the beginning,” he said. “In the end, this isn’t just about legal compliance. It’s about designing a future where people matter more than the machines that they’re training.”

The panelists advised faculty to let students use AI and learn alongside them. Snyder encouraged faculty to help students interrogate AI, asking where its answers came from and what assumptions it made. Rainey also suggested that students try new tools like Lovable that use AI to enable non-engineers to build websites and apps.

Rainey likened the present to the Industrial Revolution when technology disrupted traditional methods of working. “If the task is to plow a field, I'd much rather have a tractor than a shovel,” he said. “We have to think about AI in that sense. You need to start retraining yourself to learn this new tool.”

The panelists are all self-professed “AI optimists,” though they do express some concerns. Bounds, for example, worries that people may come to rely on AI too much. “My biggest concern is the skills that people aren't getting,” he said. “Think about all the grunt work you have to do to become a partner in a law firm. If you're not doing that work and instead you're relying on AI, how do you get that elevated position where you have all that understanding and that skillset?” 

As AI continues to advance, Snyder believes that there must be diverse voices in the room to make sure AI is developed and used ethically. “Ethics and fairness are not going to come from hope,” he explained. “They’re going to come from designing governance into the architecture from the start. The only voices shaping AI today are technologists and shareholders. We need to avoid exploitation by building systems where power is not centralized and where people can still say, ‘This far, no farther.’”

Throughout the discussion, attendees asked thoughtful questions in the chat, such as whether data can be used without users’ consent and what the ethical implications are of promoting AI as a tool that augments human thought.

Snyder closed by encouraging everyone to keep speaking up and questioning how AI is being built and regulated: “Are we using AI to serve the person (the user), or are we using the user to serve the system? That's the ethical inflection point that we all need to struggle with. If we don’t get serious about it, it won’t shape what we’re doing but who we become.”