The Department of Defense's Project Maven, launched last April, utilises the Silicon Valley search giant's TensorFlow AI to analyse hours of footage shot by unmanned drones.
TensorFlow scans the footage for objects of interest and flags them for further investigation by human analysts.
It has reportedly already been used in the field to survey areas held by Isis in the Middle East, but Google stresses the technology is being deployed for "non-offensive uses only."
Employees have nevertheless raised concerns about the company's role in defence contracting after the partnership was revealed last week on an internal mailing list, particularly in light of the company's famous founding principle: "Don't be evil."
Many have expressed disquiet internally about the software they helped develop being signed over for surveillance, according to Gizmodo.
"Military use of machine learning naturally raises valid concerns," Google said in a statement.
"We're actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies."
The company has worked with the US military in the past, and senior executives including Eric Schmidt and Milo Medin have advised the armed forces on cloud and data systems as part of the Defense Innovation Board.
Google also oversaw the development of the BigDog robotic packhorse, built by Boston Dynamics while the firm was owned by Google's parent company, Alphabet, in 2015. Originally conceived in 2005, the quadruped was repurposed to assist the Marine Corps before ultimately being abandoned on the grounds that it was too noisy for stealth combat manoeuvres.
The Pentagon spent $7.4bn (£5.3bn) on AI and data processing tech in 2017, according to The Wall Street Journal, as global warfare becomes ever more remote and tech-centric.