Integrating artificial intelligence (AI) and computer vision technologies is paying off with improvements in campus safety and operational efficiency, AI experts told MeriTalk in a new webinar, where they advised organizations to take a deliberate approach to implementation.
Helen Norris, vice president and chief information officer at Chapman University in California, said that computer vision and AI technologies can be used together to manage campus parking spaces, integrate data from disparate school systems, and aid campus police in their duties.
But she also said that at least one hot-button technology policy issue is coming into play.
“I think the area that has the most potential and probably the most challenges is around facial recognition,” said Norris. “Most universities have hundreds of cameras on site as a deterrent measure, and I can imagine a scenario where facial recognition can aid with investigations of incidents.”
Norris added that because the use of facial recognition technology remains controversial, higher education institutions will take a careful approach to rolling it out.
“Universities will move a little more slowly in embracing this use of the technology,” she said, adding that there will be “a focus on policy and ethical considerations before we rush to using the technology in this way.”
Randy Lack, general manager of computer vision AI at Dell Technologies, said that when looking at computer vision and AI, colleges and universities should start by assessing existing systems and identifying clear priorities, such as managing vehicle traffic or enhancing safety.
And he said campuses should aim to build scalable, sustainable infrastructure that shares data across campus stakeholders and improves user experience.
“If you break it down into kind of these little bite-sized chunks of, ‘how do we approach it,’ then it’s much easier to implement, and we can prioritize,” said Lack. “You say, ‘hey, our number one priority is going to be in this area,’ … then we know, okay, that’s the priority. We can go and start to work on that, instead of just looking at it and going, ‘hmm, how can we use AI?’ Because you can get into a lot of stuff.”
Norris agreed with Lack, adding that the initial focus of AI adoption should be on addressing privacy concerns and developing policies.
“AI is driven by data … computer vision technology is going to bring in its own data, but also data from other campus systems, so ensuring that you know where your data is and that the quality of the data is high will lead to greater success,” said Norris. “Clear policies about who or what tools can access various data types, whether it’s images or card swipes for physical access or data from our student information systems – that is just critical.”
“Looking down the road, I suspect that we will be thinking about how to use these tools, not just with our own data, but also with data from our partners, including local police departments and local cities, that again, will drive the discussion about appropriate use of the data and appropriate sharing,” she added.
Lack emphasized identifying capability and data gaps and addressing them at the outset, warning that ignoring them can leave campuses behind in their implementations.
“Don’t be afraid to have these honest discussions about where you might have gaps, because if you kind of just deny, ‘well, it’s probably not an issue,’ that’ll become an issue for sure, and it’s so much better to just put it out there and say, ‘hey, I think we’re gapped here in our security, or we’re gapped here in this area,’ and then approach it,” he said.
Watch the full webinar here.