Video transcript
NSW Premier's Debating Challenge 2023 - Years 11 and 12 state final


[intro music]

JUSTINE CLARKE: I'd like to acknowledge the Gadigal people, who are the traditional custodians of this land that we're speaking on today. I'd also like to pay respect to Elders, both past and present, and extend that respect to any Aboriginal and Torres Strait Islander people with us today. As we listen to the voices of these young people speaking today, we remember that Aboriginal people are our first storytellers and keepers of the oral tradition, and a reminder of the ongoing need for their voices to be heard.

My name is Justine Clarke, and I'm the speaking competitions officer for the NSW Department of Education. Thank you for coming here today to witness the 2023 final of the Premier's Debating Challenge for Years 11 and 12 for the Hume Barbour trophy. Our chairperson is Luka Miletic, and our timekeeper is Eric Scholten, both from Sydney Boys High School. I'll now hand over to Luka to take over proceedings. Thank you.

[applause]

LUKA MILETIC: Thank you, Ms. Clarke. I welcome you to the 2023 state final of the Premier's Debating Challenge for Years 11 and 12 for the Hume Barbour trophy. This debate is between Cammeraygal High School and Sydney Girls High School.

The affirmative team from Cammeraygal High School is first speaker, Josh Herridge; second speaker, Noah Rancan; third speaker, Alexia Rigoni; and fourth speaker, Jared Atherton. The negative team from Sydney Girls High School is first speaker, Sofia Malik; second speaker, Miri Stubbs-Goulston; third speaker, Anhaar Kareem; and fourth speaker, Sofia Tzarimas. The adjudicators are Ellie Stephenson, Jeremiah Edagbami, and Micaela Bassford.

Ellie Stephenson graduated with a Bachelor of Arts and Advanced Studies, majoring in political economy and environmental studies, with first-class honours from the University of Sydney in 2022. She was previously a high school debater and a member of the Combined High Schools Firsts Representative Debating Team, and won this competition with her school, Smiths Hill High School, in 2017. Ellie was a member of the NSW representative team and won the National Schools Debating Championship that same year. She is a 2-time World University Debating Championships semi-finalist and was a top 10 speaker in Australasia in 2021.

Ellie was the chief adjudicator of the Australasian Championships in 2022 and 2023 and was ranked best judge in Australasia in 2023. She is also the deputy chief adjudicator at the upcoming 2024 World University Debating Championships. Ellie currently works as a correspondence officer in education and skills reform for the NSW Department of Education.

Jeremiah Edagbami is currently studying law and sciences, majoring in medical biotechnology at the University of Wollongong and recently completed his law clerkship at Clayton Utz in Sydney. He was chief adjudicator at the Australian National Universities Championship earlier this year and was a top 30 competitor at the World University Debating Championships in 2022. Jeremiah has served as board director and secretary for the Australian Debating Council since 2021. He was awarded the International Business Law Medal by the University of Wollongong School of Law in 2022 and has won a total of 9 university debating competitions during his time there.

And lastly, Micaela Bassford was an accomplished debater and public speaker as a student attending Kirrawee High School. She was a state finalist in the Plain English Speaking Competition in 2010 and 2011 and a state finalist of the Legacy Junior Public Speaking Award in 2007 and 2008. She was also a state finalist at the Premier's Debating Challenge for Years 9 and 10 in 2009 and a member of the Combined High Schools debating team.

Micaela has adjudicated both state and national finals for the Plain English Speaking Award and the Legacy Junior Public Speaking Award, as well as state finals for the Multicultural Perspectives Public Speaking Competition. Micaela holds a Bachelor of Economics with first-class honours and a Bachelor of Laws from the University of Sydney. She is currently assistant director at the Australian Competition and Consumer Commission, working on the ACCC's childcare inquiry.

Welcome, adjudicators. Each speaker may speak for 8 minutes. There will be a warning bell at 6 minutes with 2 bells at 8 minutes to indicate a speaker's time has expired. A bell will be rung continuously if a speaker exceeds the maximum time by more than one minute. The topic for this debate is that Australia should ban facial recognition systems. The first affirmative speaker Josh will begin the debate.

[applause]

JOSH HERRIDGE: When a photo of someone's face is recognised through a system or database, it is a violation of their right to privacy and has the potential to exploit them and bring up their past, exposing them to scrutiny or bullying. Four parts to this speech. Firstly, some setup; then a principled argument on why people's privacy is being violated through these systems; and then two practical harms: firstly, the data breaches that could arise from this, and secondly, how the potential for image-based abuse is greatly increased through the use of facial recognition systems.

So firstly, onto setup. What does facial recognition look like in this day and age? When photos of your face exist in a database and can be scanned by private companies or governments, and that data can be matched up to other related data points to find either another photo of that person or information related to that person, this is what facial recognition systems do.

For example, this looks like Face ID on phones being used to unlock technology on laptops or phones. And this also looks like reverse image searches on Google, where a person can put in an image of someone, and related images of that person's face will show up, showing all photos of them that have been uploaded online. This also includes facial recognition attached to CCTV cameras, through which other photos or images of you can be matched online.

And note that this doesn't include proof of identity, such as a driver's licence or photo ID card, which are fundamental to the operation of society. Because driver's licences and photo ID cards are not the mechanism through which faces are recognised, we are instead banning the software that allows for this to be done. And note that this technology is widespread throughout society and is increasing in its scope at a rapid pace. I'm going to give you the example of Bunnings and other Wesfarmers subsidiaries, which recently implemented facial recognition software into their CCTV systems without informing their customers.

Moving on to the first principled argument, that people have the right to their privacy and this facial recognition software ultimately violates it. Now ultimately, people have an inalienable right to their own images and how they should be used. This principle was very intuitive in this debate. And we thought it was justifiable because the images of individuals, when spread, can be used to impact their livelihoods, damage their reputations, and impact them quite significantly in negative ways.

So because it was intuitively your data, it was an image of you, it was your photo, you had the right to protect it and to determine what people could do with it. And so it was your privacy and your right to privacy being violated by the holding of these images, whether or not they were actually used for anything. In most cases, we thought that they would actually be used, because there would be no reason for companies or governments to hold this data in the first place if not to use it. But simply holding this data was harmful enough to violate someone's right to privacy because of the potential risks that it posed to them.

Now further on this, it is important to note that in the specific case of facial recognition, where people aren't always informed that it's being used, that images are being captured, and that they're being compared to larger databases, people aren't actually able to give informed consent on how their images are being used. And we thought that this was true for 2 main reasons in this debate. Firstly, the extent to which images are used is unknown to the person.

Whether the images are being shared, whether they're being put onto larger databases, the potential for these systems in the future, the capabilities for them to be used in the future, the scope of the software was completely unknown. And the size of the facial recognition databases that these companies or governments held was completely unknown. And so people aren't able to give informed consent to their data being used in the first place. So even if there was a reason why people would want to hand their data over, they still don't know, and they still can't give informed consent as to how their privacy would be protected in this case.

And ultimately, if your face ID has been used-- if you logged on to a laptop or a phone, and your facial ID has been utilised, and the images have been verified, it could be shared or spread across a wider database within the company. It could be used so that employees from that company can look at it, can verify it. They can take note of it. They can make a profile of you. And this makes people unable to give informed consent because they do not know the scope of the harms or the risks of this system.

And moving on to the second mechanism here, people don't always know when their facial recognition data is being used. For example, when Bunnings decided to implement facial recognition systems connected to the CCTV cameras in their stores, they did not inform a single customer. People were not notified of this.

And so we see that people can't, because of this, as well, give informed consent. But they also are completely unaware as to when this is being used. And so, ultimately, people's privacy is being harmed as a result of this.

So at the end of this point, it's important to note that individuals had a right to their own images because they were innately their own property. They belonged to them, and facial recognition systems deprive them of this right to privacy. And note that this increases the burden on the negative team here, as they now need to prove not only that they have benefits under their case, but that these benefits outweigh the principled right that people have to privacy.

So moving on to the first practical argument, that image-based abuse is increased significantly with facial recognition systems in place. Now firstly, what is image-based abuse? It can involve blackmail, where someone's got an image of you and they try to blackmail you to get something out of you. They could extort you for money by threatening to expose an image of you which they've gained. They could also enact physical or verbal abuse over someone, or harass them online and attempt to damage their reputation or persona in that way.

And when does this normally occur? It normally occurs when an image of an individual, which is often sexually explicit, is used to extort them for money or to get them to engage in some unwilling acts. And so under facial recognition software, why is this actually more likely to happen? Well, we thought that if someone has a photo of another person that includes their face, under facial recognition software and technology, they would now be able to upload this photo to a reverse image search, for example, and gain other photos of this person. And they have not just other photos, but every single photo of this person that is online that has their face in it.

And so when facial recognition software doesn't exist, there is only one image that a person can use to extort someone or to blackmail them. But when facial recognition software is in place, it means that they now have access to a much broader range of photos. And this is ultimately going to contribute to much larger harms on the individual because they can be extorted with a wider range of images.

So the impacts of facial recognition technology were much more harmful in regards to image-based abuse. And it means that people, through facial recognition software and through reverse image searches online, could find out what school someone goes to. They could find out their sports team. They could find out the job they have, potentially where they live. They could find out their daily habits and routines.

And this was really harmful to individuals in this case, as it meant that the potential for abuse was much higher. And that leads to increased psychological harms and also potential harms from physical harassment. And not only is there the risk of someone directly being harmed through exploitation or through harassment, but also the psychological harm done to them knowing that people have the potential to use these images in such a way, knowing that an image they posted a decade ago could still be found online through facial recognition software and pose a potential harm to them in the future, knowing that this is never something that's going to go away.

So at the end of this, we saw that this harm was so significant in this debate, and was increased so much by facial recognition software, that image-based abuse was going to heighten so much that it was going to harm individuals through images of the past and present as well. And this was something we could not stand for.

Now on to the second practical argument, that data breaches and hacking were also going to be a problem in this case. Ultimately, a data breach is when someone can get information through hacking or through some sort of software loophole. And they can see all of the data that is involved. Not just certain images; they will likely have access to a broad range of faces and, as I mentioned before, the connections and the data points that align with these faces.

And if we have the most technologically-advanced companies in the world unable to prevent data breaches, the ones that specialise in preventing hacking, [bell dinging] if they are unable to prevent these things, why would other companies and governments be able to prevent this in the first place? So we thought this was a genuine harm in this case. And we thought that data breaches were harmful generally because criminals could impersonate you.

Photos could be used for further image-based abuse. And deepfakes could also be created, contributing to image-based abuse as well. And this was a clear harm in this debate that we should try our best to mitigate, because people could use these images that they have gained through hacking to impersonate people, to cause them financial troubles if they use their image to log on to their banking account, to scam them, and to damage their reputation.

And this would be prevented if we just didn't use facial recognition software to begin with. If we stuck with thumbprints instead, which are just as secure, they're personalised. They're unique.

You can't have a twin impersonate you with a thumbprint. And they also don't have the same potential harms because someone can't go up to you on the street and use your thumbprint to exploit you. And so note that this material pre-empts a lot of what the opposition would say as there is simply no necessity for facial recognition in this case.

So at the end of this, we see that the principled harms and the practical harms of using facial recognition are simply too high. And we should not use them. Proud to affirm. [bell dinging] Thank you.

[applause]

LUKA MILETIC: The first negative speaker Sofia will begin their case.

[applause]

SOFIA MALIK: In a world where the government had access to things like your search and purchase history, and where we found it completely natural that our faces were recognised and recorded at the airport as part of the social contract which kept us safe, it was unclear why facial recognition was so uniquely different that we ought to ban it. That was side affirmative's burden in this debate. I'm going to be doing a couple of things in this speech. First, some set-up; then a couple of claims: first, why facial recognition will be conducted in a good and advantageous way; and second, on safety and security. Responses will be integrated.

First on set-up, what does this tech actually look like? I'm going to be splitting this into what it looks like now versus what it is likely to look like in the future under our side. First thing to say is that facial recognition is generally used for security. That's to say it's for personal reasons, like when you unlock your phone or use Apple Wallet, stuff like that.

Globally, it's becoming an important way of convicting criminals. And it's also generally used at airports. That's common practice. And that's to say that the main uses of facial recognition technology right now are for safety and security.

Note that the examples that opposition gave us were equally as invasive on their side. For example, with social media accounts, employers were still able to look through your accounts if they wanted to find something incriminating on you. And often, the sorts of breaches that they talked about, things like deepfakes, happened as a result of someone you knew having an incriminating photo of you and selling it to someone who could use it in an incriminating way.

So what does this look like in the future? We think that, first, this looks like expanding facial recognition tech. That means that it becomes more efficient and effective. It also becomes more accessible so it doesn't cost a lot.

We think this looks like something like the defence in a criminal justice case being able to access facial recognition technology, just as they can more easily access DNA technology now that DNA testing is more widespread and democratised. We think it also means that you get better quality. That's to say that it's more effective and less fallible, which means that it doesn't make mistakes. And we also think that you could probably expand it into other fields, making it more socially utile in general.

Second thing to set up here is what is the role and responsibility of the government in the situation? In our current world, the social contract essentially says that you, as an individual, need to give up things in order for the state to do a good job of protecting you. So the government collects data to this end.

That means that they do things like monitoring your search history to decide whether you need to be put on a watch list for searching up something that threatens national security. It means that they track your purchases, for example, if you purchase a weapon. It means that you are subject to searches at the airport that are equally as invasive as what the opposition is trying to tell you facial recognition technology does.

Note that airports also do use facial recognition technology, and that is used to the end of the social contract of keeping you safe. That means that the opposition needs to stand behind these invasions and also needs to prove that facial recognition is, 1, uniquely harmful, and, 2, that this harm disqualifies it from the government's current use of data collection.

So what does our world look like, and what do we stand by on side negative? First, we stand by pretty heavy regulation. Companies under our side must be transparent. And punishments must be implemented when, for example, a company does not abide by facial recognition conduct laws.

So under our world, the government has a lot of control over facial recognition. They're better able to regulate it. And this sort of tech is largely democratised.

In their world, we think it's incredibly likely that companies still use facial recognition technology under the radar and just don't have to disclose it. This looks like TikTok using facial recognition technology and just saying no at press conferences when they're asked about it. This means that you get less transparency. Or in the best case, where companies don't use it at all, which is the opposition's best case, we still don't like that on side negative, because we don't get the type of innovation and effectiveness that we're looking for, which can be used to the end of security.

So 2 claims in this debate, and then I'll go over paths to victory. First is that the use of this technology will be done well. And second, that it's going to be more effective at achieving safety and security. We think our paths to victory in this debate are, first, that we actualise the government's responsibility to protect citizens, and, second, that we achieve better safety and security outcomes for people.

On the first claim, of how we think this technology will be used in a correct and advantageous way, a couple of pieces of characterisation: first on the public; second on the government. First, we think that the public is, A, largely sceptical and risk-averse. That's to say that they have an international awareness.

They probably think that things like the Chinese social credit system are quite worrying. There's a lot of moral panic about things like AI and deepfakes. And there's also a lot of fear over data breaches. Those are all reasons to believe that people are likely to be risk-averse.

But, B, we think they're also willing to trade off data for the greater good. But not only the greater good; they're also willing to trade it off for just simple convenience. That's to say that when you get a social media account, you're largely aware that your data is going to be sold. But you consent to it anyway because it's convenient for you to use social media.

It also means that when you travel, you consent to there being CCTV and facial recognition done on you because you understand that it's probably for the greater good. You also do this when you consent to things like recorded calls. All of those are reasons to believe that the government is probably accountable. If people are largely sceptical and risk-averse, they're better able to hold the government accountable to the degree of regulation that they do.

Second, on characterising the government, in Australia, we live in a democracy. That's to say that the government is largely accountable to the whims of the media and elections. Those were all reasons why they were likely to conduct facial recognition in a way that was accountable to the way that the general populace wanted them to use it.

So under our side, we allowed the government to regulate facial recognition technology. They could conduct reviews of it. And at our worst case-- at our worst case, we still achieved things like transparency. The government was able to institute things like fines for violating facial recognition laws.

On the comparative, when you got rid of facial recognition technology, it meant that you couldn't regulate instances where companies did it anyway. It meant that people trusted the government and companies less. And it meant that corporations often got away with it more, and in a much more invasive or sinister way.

These were all reasons why it was likely that the government was going to invest in this technology in a good way. It meant that they had other countries to model against. And they were at the whims of the social scrutiny that they were going to receive. In making regulatory bodies or in passing regulation, they had to create things like committees of experts. That was going to be advantageous for the creation of these laws.

Second claim, on safety and security. Just to set up how this would be used: we think, first, in a criminal justice setting, it means that facial recognition would be used to make criminal proceedings more accurate and more resource-efficient. Under that first bit, we think that we get more accurate evidence than witness accounts.

We think the reasons that human error happens are that people sometimes have an interest in perjuring themselves on the stand, witnesses had scrappy evidence or didn't see the thing happen, and, generally, the adversarial system is quite bad at presenting evidence via witness statements that is good and infallible.

So we think that it poses a better checking mechanism for criminal trials. Benefits of that are just pretty intuitive and pretty high-impact. That's to say that you catch the correct criminal. It's good for the victim, and it's good for justice.

Second bit, on how it's more resource-efficient: we think we minimise the wastage of court resources. [bell ringing] That's to say that you don't have to wait for justice to be achieved. We think the bigger impact of that is just that people perceive justice differently.

So, one, it's a deterrent. If you know that facial recognition is around, you're not going to commit a crime because you will get caught for it. The second is that you're more likely to feel safe as a victim or potential victim in society.

So note that this was the explicit function of the social contract. It meant that people felt more protected by the state as a direct result of the government having access to their data. And it meant that you were more likely to come forward as a victim and more crimes would just get prosecuted.

That was a huge impact for us. It was something the opposition needed to engage with. Note that any one of these applications could win us this debate because it just had such huge implications for justice.

Second, quickly on security: security mechanisms still exist on side affirmative. They're equally invasive, just not as efficient or effective. So with things like fingerprinting and CCTV footage, note that we keep all of the benefits of deterrence and safety on our side, with mechanisms that are equally as invasive as fingerprinting. [bell ringing] [inaudible].

[applause]

LUKA MILETIC: The second affirmative speaker Noah will continue their case.

[applause]

NOAH RANCAN: The opposition's arguments today have been based around a failure to understand the unique harms which are associated with facial recognition technology and which do not exist with other technologies that have been used for significant periods of time, such as DNA and such as fingerprint recognition. So in order to characterise the arguments that have come before this debate thus far, they can be split up into the principled and practical arguments. I'll start first with the practical arguments.

So the opposition have largely centred their case today around how their model of supporting these facial recognition technologies can support safety of citizens through recognising criminals and preventing crime. As I said just before, we would say that for significant periods of time, the police have used technology such as fingerprint recognition, such as DNA.

And we know that these are effective. We know that these were used, if we take an extreme example, post-September 11 to find people like Osama bin Laden. This was all done without any need for facial recognition. This was all done with DNA samples, and with fingerprints, and other technologies that they had available.

And what we know is that these mechanisms are less harmful because if someone gets a hold of a fingerprint, they aren't really able to match that to someone unless they have access to a huge database. They can't just recognise someone in the street. It's nowhere near as easy without specialist technologies that police have and that other law enforcement have to get a hold and use any of these mechanisms.

We would also say that these mechanisms are indeed more secure because of the difficulty in changing one's fingerprints. We know that in terms of one's face, things like plastic surgery can falsely trip up these recognition technologies. We also know that identical twins have been known to trip up these technologies. So we know that mechanisms like fingerprints and DNA are more secure, because there is no way of changing them, and because twins don't share the same fingerprint and largely don't share the same DNA.

They also went on about how their model proposes significant regulation and how they support companies being transparent. Now as my first speaker alluded to, and as I will get back onto later in this speech, that is only one aspect of the harm. One aspect of the harm is absolutely what these companies will do by themselves. And they claim to have addressed that.

But even if their model does address that, which we say it doesn't, this does not account for the significant harms that we've brought up surrounding the possibilities of data breaches, which is arguably one of the most important and one of the greatest negative impacts that these technologies may have. They also went on about how this can work to deter crime. We would again say that significant research has gone into crime and how to deter it. And the causes of crime, and the reasons why people still choose to commit crimes, come down to optimism bias: they don't think that they're going to get caught. Regardless of whether or not you have these mechanisms in place, people still don't think they're going to be caught, so they're still going to commit crime.

They also went on about how people are now more likely to come forward because this is all available. And again, in terms of this, what we would see is witnesses, rather to the contrary, being more afraid to come forward, because all of a sudden, under cross-examination, they can be put under more scrutiny. These mechanisms can be used to see exactly what they've done. And that can lead to reluctance in coming forward. We already know this sort of reluctance does exist in assault cases. And we would say that this is only going to add to it.

So at the end of this, we have proven that, practically, these mechanisms have no unique benefits. There are no unique benefits to being able to use these technologies because, just circling back to their point on crime, our model does still allow for the use of closed-circuit television. It does still allow that.

It only comes in when that CCTV is working to identify and put a name to someone's face. So we would still have CCTV footage used in courtrooms. We would then just have witnesses matching the CCTV to the accused.

And so we would say that, on the whole, their model poses no unique benefits. The use of these facial recognition technologies has no unique benefits. But as we have proven, they have unique drawbacks that are exclusive to these sorts of mechanisms and that don't exist in other forms of recognition, such as fingerprint recognition, which can be used on a phone as well as for law enforcement. So seeing as they have not been able to prove any unique practical benefits to their model, they cannot win this debate.

Then moving forward onto the principled argument: they centred a lot of their argument around how individuals have consented to using facial recognition. And we would say that, first of all, on many people's devices, when they sign up, most people don't read through the terms and conditions, which means that, yes, superficially, they have consented to it, but they haven't given any sort of informed consent. We would also say that in the example that we brought up before of Bunnings and other Wesfarmers subsidiaries, the only notice that they gave to consumers was small terms and conditions written on boards as people went into the stores.

Now under no circumstance is everyone going to read that board every time they go in to see if it's changed. They did not tell people that it had changed. And so people were unable to give informed consent or really any form of consent in those instances.

I would also say that in terms of people using these sorts of technologies in public, there are very few mechanisms, again, through which the government can seek out people's consent. But we would also say that if people didn't consent to these sorts of technologies being used in public, which we know people don't because of the large backlash that we had when Bunnings and Wesfarmers implemented these facial recognition softwares-- even if people didn't consent, which we say is highly likely given the harms involved, they don't have an alternative.

They don't have an alternative to living, going out into the street. There's no alternative that they wouldn't be tracked in. So they don't have an opportunity not to consent.

And finally, on the topic of consent, with regards to what we brought up before on how this can be used to find people's images from their past that they have uploaded: yes, someone, when uploading those images in the first place, did consent. But at the same time, they did not consent to the possibilities of what may happen to those photos in the future. They did not consent to the possibility of these facial recognition technologies being used to then find these photos and extra information about them. So on that, again, even if we don't win that principled argument, which we are comprehensively winning thus far, it doesn't matter, because the opposition have failed to prove that there are any practical benefits to their model.

So finally, [bell ringing] on to my main bit of substantive, being data breaches. As our first speaker brought up, we've seen companies such as Optus, Medibank, and Telstra, companies that tend to be at the forefront of security, falling victim to these attacks over the past year. And we know that if these companies had had facial recognition data available to them, this would have been uniquely harmful to the people who were hacked. And we know this because a lot of things that banks require people to do require them to go into branches.

And we know that if criminals now have access to this facial identity, they can go into banks impersonating people. They can use masks. And they can impersonate that person, which they are presently unable to do without facial recognition technology. And this is uniquely harmful because large transactions in banks require you to go in store. Thank you. [inaudible]

[applause]

LUKA MILETIC: The second negative speaker Miri will continue their case.

[applause]

MIRI STUBBS-GOULSTON: The opposition lost the debate today because they were unable to prove why the use of facial recognition technology was so uniquely harmful that it required the relinquishment of both public and private security and safety for the Australian public. Note we provided you with numerous reasons why facial recognition was a useful tool for protecting the public, which I'll go on to expand on, while they gave no inherent public practical harms of facial recognition.

So we have 2 main paths to victory in this debate. The first one is why we get better, or at least symmetrical, outcomes for individuals who are participants in the private use of facial recognition technology. And our second path to victory is why the government has a responsibility and a need to provide safety and security to the people through facial recognition technology.

So just on the first path to victory here: the opposition comes out and tells you that users are inherently unable to consent to the use of facial recognition technology in things like private social media companies, et cetera. So what are our responses to this? Firstly, we tell you that there is a heavy incentive for some kind of facial recognition data.

And there's a heavy practical benefit to the use of facial recognition data by private firms that is not always for strictly invasive reasons. We tell you that facial recognition can be used for the protection of user data, for example, the use of Apple face scans to unlock phones, stuff like that. And we also tell you it's a heavy provider of convenience, for example, Facebook tagging people based on face, or Apple's system where you can look at certain photos grouped by face, these kinds of recreational benefits that the opposition never really accounted for. And we think that these are the kind of benefits where people agree to forego a small amount of privacy for a system that is recreational.

But even more than that, because of these unique benefits that we get, why do we actually get more transparency from these private companies under our side? We tell you that because this technology is so heavily ingrained in algorithms, we think that companies have a direct incentive to hide it if we were to institute the opposition's model. That looks like the fact that, despite many countries conducting investigations into TikTok's use of facial recognition in albums, those uses still remain buried and largely unaccounted for, because they are so uniquely hard to regulate.

We note that the places that have had the most success in protecting privacy are places like Europe, which, under data regulation that enforces transparency rather than a blanket ban of facial recognition technology, maintains a close relationship with these kinds of private companies. On the other hand, countries like Brazil, who have attempted to fully curb the use of facial recognition technology, have actually had a lot less success with the material benefits of this.

Why is that? So we think the first thing to say here is that with facial recognition technology, were it to be misused, there is no clear victim like there would be with, for example, something like bank fraud, where you know you've lost money, because the use of facial recognition technology is so subversive and hidden within these algorithmic systems. And because there are such heavy incentives to use it, and because, under their side, you get such heavy penalties for continuing to use it, what is just going to happen here is the hiding of facial recognition technology by these private firms and the diminishing of the relationship between countries and big firms, because to maintain facial recognition in any way, they're just going to try and make it as secret as possible and maintain zero transparency.

At least under our side, we give you regulatory frameworks where at least the user understands how their face is being used. We saw this within the data privacy systems in Germany. But then, when they come and tell us about things like data breaches in private companies, for example, they never really characterise what this actually looks like. Because let's say, yes, a malicious group now has your face: what exactly are they likely to do with that?

That never stands up, because under their side, hackers still get things like names and credit card details. And with those data points that these malicious groups are likely to collect with the leaking of data, could they not just look you up, because everyone's face is on the internet in some capacity? So they never really mechanise how this is going to be uniquely harmful.

But let's take the opposition at their best and assume that the facial recognition ban wholly works and companies stop using it. What kinds of data collection are companies going to use instead? We think that companies like Apple are going to revert back to fingerprint systems. And note that Apple is a very prevalent example in this debate for a reason: it's the most pervasive user of facial recognition technology.

So we think that, 1, not only is this a massive inconvenience to Australian consumers, but, 2, the data leaks they talk about are going to be a lot more harmful. Why? Because the leaking of data points like facial recognition is much less harmful than the leaking of fingerprints, which are used in airports, and passports, and banks.

And we think that these are really the kinds of leaks that we're primarily worried about, and that they're never going to be able to account for them. So we ask the opposition: what is uniquely harmful about facial recognition but not data points like fingerprints? And why are we going to ban the use of facial recognition but not, for example, fingerprint samples?

And just a third thing to say here: it's important to note, when they talk about how users don't know the future use of their content, that, 1, obviously, like my first said, we'd rather have guidelines. But, 2, the truly harmful recognition technology is things like DNA collection by companies like 23andMe, which they largely ignore.

So what do we see at the end of this? One, the opposition's claims about protection of privacy are diminished to such a marginal benefit, because we prove that there are still all these data violations in meaningful ways. But 2, on our side, we offer greater transparency to users and greater utility to users within private consumer-firm relationships. And thus this harm is so marginal that, even if you believed at the end of this that we get worse protection of data by private firms, we offer you a unique public safety benefit that the opposition will never be able to match.

So just going on to that quickly: when we talk about safety in the public world, what do they tell us here? They tell us that, one, we can just use DNA samples for stuff. We thought we could just about ignore this, because it is so ridiculous, and it does not account for the majority of crimes that occur in public spaces, which would go unaccounted for because there's just DNA everywhere. But then they also tell us about things like twins, which is also such a small example. And it's just probably symmetrical on either side.

But what can we tell you here? Why are governments likely to use technology in a way that best serves public utility? We tell you, first, we know that the government is going to use the technology well because it's always going to be heavily scrutinised.

There's lots of fear, as my first told you, about systems like the Chinese social credit system. And the Labor government, for example, is unlikely to overuse facial recognition, because they'll get scrutiny both from the Coalition and from the media. And we think that, B, people care about public security that is ensured through some use of facial recognition technology. And the government is likely to meet this and strike a balance in the use of facial recognition technology that serves, as my first said, as a sort of social contract, where people are ensured safety at the expense of some privacy.

So what's the role of facial recognition in actually ensuring public safety? I'll just go over this again, because they had a very marginal and small response to this in the first place. So, 1, we get retribution for victims. Even if people feel a little violated by cameras in public, they were going to be reassured that, for any crime against them, the criminal would be punished. That retribution is always promised to them. We think this is especially important for victims of sexual assault, for example.

2, we think there is a strong disincentive for criminals to commit crimes. That's to say we get fewer commonplace robberies in public areas, places like the CBD, where cameras are likely and crime is likely because there are so many people there. And 3, we probably also get less racism [bell ringing] and bias in the justice system, because we think that facial recognition has always been uniquely better at matching a surveilled picture of a person to the actual person in real life than a police officer would be, for example.

That looks like the fact that there's a lot of room for prejudice when, say, a white police officer is matching CCTV footage to a suspect, because there's a lot of room here for an 'all people of this race look the same' bias. And that's a harm that they were always going to get under their side, where we offered a more direct, more honest, more truthful, and more correct attribution of a criminal to the actual crime.

So at the end of this, we can mitigate their benefits about private safety. We can match that, or we can even offer even greater transparency for the consumer. But they would never be able to actually offer the same public security benefits that we can on our side of the case. And for that reason, we have won today's debate. Happy finals. Thank you. [bell ringing]

[applause]

LUKA MILETIC: The third affirmative speaker Alexia will conclude their case.

[applause]

ALEXIA RIGONI: Two questions in this speech-- firstly, how is this technology likely to be used? And secondly, how did this use of technology impact individuals?

Firstly, I'm going to talk about governments because the opposition tells us that the government is likely to use this technology really well because, firstly, the government is risk-averse, and, secondly, because they can be held accountable through elections. We give you many reasons for why this is totally untrue. Firstly, the government doesn't always tell us when they do things.

When they use facial recognition technology, they don't always say so in an open way. In fact, they often keep it secret, meaning that individuals cannot use that as a mechanism to vote. They cannot vote on that.

We tell you, secondly, the idea that legislation is often not scrutinised is certainly true. So even if the opposition tried to make things public in a certain way, most people probably weren't up to date with how the government was using facial recognition technology, because they had other things to do. They weren't particularly interested in government policy. It was unlikely they were going to be able to actually scrutinise how the government's use of this technology was impacting their lives. They probably couldn't vote on that or hold the government accountable in that way.

Thirdly, we tell you that some people are probably unlikely to think through the consequences of how their facial recognition technology is going to be used. People tend to be quite short-termist. People tend to not generally think about government policy in such a deep way. It was probably unlikely people would hold the government accountable for any sort of misuse of this technology or voice their opinion even if they didn't like it because they weren't really thinking that deeply about it.

We tell you, fourthly, the notion that the government is risk-averse is completely ridiculous. We give you the idea that governments go to war all the time. They take risks. That probably wasn't a particularly strong argument.

We tell you the Robodebt example. The government continued to take money from people in an illegal way because they didn't think they'd be caught. That was an example of how the government wasn't risk-averse.

And, fifthly, we tell you, obviously, both major parties in Australia were likely to use facial recognition technology. It was unclear that this was something people could vote out even if they wanted to. We tell you, also, multifactorial voting is something that exists, so it was unclear that this was the one thing people were going to vote on.

Even if they disagreed with it, it was unclear that they could vote it out, depending on the other policies that they wanted. So the idea that the government was going to use this in a good way, or that people could potentially hold the government accountable for it, was totally wrong.

The next thing we tell you: the opposition brings up the example of the criminal justice setting, how this is likely to be used in a really positive way. The first reason they give you is that it's likely to give more accurate evidence. The first thing we say is that we would question how much more accurate facial recognition technology actually is, particularly because, as we brought up at second, fingerprint technology is very useful. And the opposition glibly refuted this by saying like, oh, facial recognition technology is everywhere in CCTV cameras.

Obviously, CCTV cameras still existed. We're not disputing the use of photography or videos ever. We were disputing the mechanism by which you could attach a photo to someone and pick up lots of different data points. It was unclear why criminal justice couldn't be done through that.

We tell you, secondly, even if it increased the number of people that were convicted by, like, 2%, we would still trade that off for your right to your own images and your right not to have that image connected to your data. So we didn't think that was a huge issue in this debate. We thought we won it even if it was.

Next, the opposition tells us that it's likely to deter crime. But obviously, as we refuted this at second, people commit crime all the time, even when there are many laws telling them that they can't. Even when there are signs that say no smoking, people still smoke, because people generally don't think through the consequences of their actions. It was unclear why facial recognition technology was going to be the tipping point at which people believed that, considering facial recognition technology already existed and people were still committing crimes.

We tell you, secondly, about the opposition's claim of feeling more protected by the state. We think this was just ridiculous. Obviously, you probably didn't feel more protected by the state if the government had all of your information.

If anything, this was just going to open up more of the narratives that already existed surrounding how we were living in an Orwellian dystopia. It was probably going to make people more distrusting of the government. It was probably going to make people think that the government was out to get them, that the government was tracking them, as in every dystopian movie.

Thirdly, most ridiculously, the opposition brings up this notion of prejudice. It was totally unclear how facial recognition technology was going to drastically reduce the amount of prejudice in the legal system, especially considering that an individual was still looking at all of the facial recognition data. It was unclear why that prejudice was somehow going to be filtered out. Obviously, prejudice still existed within the legal system even though facial recognition technology already existed. We thought that was pretty ridiculous.

Lastly on this point, the opposition tells us that this is likely to minimise the wastage of resources because now people didn't have to go through 10 photos. They only had to go through one or whatever. We tell you, we happily trade off some people doing more work, or the wastage of more resources, or the spending of more money in order to protect the privacy of individuals.

The next thing the opposition tells us about how this technology is likely to be used is in protecting data. The first thing they say is fingerprints and passwords. We tell you, firstly, fingerprints and passwords were probably more than sufficient for protecting data; facial recognition was unnecessary.

And we also tell you this was preferable because there was no potential for image-based abuse. There was no potential for other people to look up your behaviour 10 years ago, potentially use that against you. That was something the opposition never contended with.

But next, the opposition tells us that it was convenient to tag someone in a Facebook photo. We tell you, we do not care about convenience. We much prefer people to have their privacy protected, to have their rights protected in terms of whether they wanted other people to see their photos.

That was a choice they could make for themselves. If they wanted other people to see it, then that was on them. But that wasn't a huge issue. We were willing to trade that off.

Lastly, I thought it was important to reframe how the opposition talked about the usage of this technology, because they focused almost exclusively on government. We tell you, obviously, companies are likely to use facial recognition technology. We tell you this could be used in a really harmful way that the opposition didn't grapple with.

For instance, we gave you the example of the company Bunnings using people's facial recognition data without their knowledge. If we impact that and explain why that's harmful: an employee, for instance, can see your facial recognition data. They can see you have walked into a store. They can see your previous purchase history, potentially. They could potentially go up to you and guide you in a certain direction to make you buy certain products you didn't necessarily need, because of your interests in the past, because of [bell ringing] what you had purchased or googled on other sites.

We told you that was incredibly harmful because it made you spend more money. It was really inconvenient for your life. That was something that a company was likely to do. They were likely to exploit you because companies were intuitively profit-motivated. And so that was something we thought was really harmful that the opposition couldn't grapple with. And that was probably debate-winning material.

The next clash in this speech is how this is likely to impact individuals. The first thing the opposition tells us is that employers can still look you up, so the harm of image-based abuse probably wasn't unique; anyone can still look you up, for instance. So if your employer was going to look you up, they could probably still find photos of you from years past.

We tell you that was probably untrue, because to the extent that you could delete photos online, for instance, they probably weren't going to show up in your social media feed if you were applying for a clerkship at Clayton Utz. If the photos were deleted, for instance, they were much harder to find.

Facial recognition technology was much more likely to pick those things up from the dregs of the internet, something probably quite harmful that you wanted to hide because you wanted to remake your reputation. Being able to delete things allowed you to present a cultivated image of yourself to the world. That was something that was hindered when facial recognition technology was used, and we thought that was harmful.

We thought you could still get the benefits of someone being able to look up your social media feed, but only what you wanted them to see, not some sort of sexually-explicit image, for instance, you had sent to someone when you were 14 that had been used against you. That was something we thought was harmful. We articulated that at first to very minimal response.

Next, the opposition tells us that monitoring search history was quite a comparable example in terms of how this was likely to impact individuals. We tell you, firstly, search history is quite different, because it does not include your face. Your body, and we articulated this at first, was an intuitively inalienable part of your identity. The idea that somebody could watch you, and look at all of the different points on your face, and then connect that to your data profile may not be something that you, as an individual, would see as a morally correct choice. [bell ringing] You should be able to make that decision for yourself.

Your search history was different because it didn't include a physical part of your body. We thought your body was quite different to the things you searched up on Google because it was intuitively yours. You had ownership over that. It was something that was difficult to explain. It was a principle, but it still existed.

The next thing the opposition tells you is that fingerprints were just as invasive. First of all, we tell you they just asserted this. But secondly, we say your face was probably much more invasive, because no one could look at you on the street and find your fingerprint.

They could probably do that with facial recognition technology. They'd seen you online, for instance, and now you were there in the flesh. That was something that the opposition couldn't contend with.

Lastly, the opposition most ridiculously tells us that the government can't regulate this when it's banned. It was a classic black market argument. Obviously, you can fine or imprison people if you find that they are using it.

But next, we tell you that the responsible or helpful use that the opposition says is likely to happen could easily be traded off against your privacy and your ownership of your body. We thought that was really important. So for all those reasons, vote affirmative.

[applause]

LUKA MILETIC: The third negative speaker, Anhaar, will conclude their case.

[applause]

ANHAAR KAREEM: I think the most concrete harm the opposing team can give you, the one they spend the most time of their case on, is that you get surveilled in Bunnings. I think the harms that we can give you are much more tangible and much worse. Those were things like facing more security risks, losing out on crime-solving technology, and decreasing convenience. I think these were much more tangible and debate-winning, which was why we won this debate.

Two main issues I'm going to be looking at: firstly, on the principle, which side best upheld people's principled rights; and then, secondly, on the practical. Firstly, on the principle. I'm just going to run through this really quickly, because I think the opposing team gives you some pretty flimsy analysis about principle. It's quite intangible. But I think they tell you 3 main things.

The first thing they tell you is that people cannot consent to facial recognition technology. Three responses to that. I think the first is to say that the mechanism they set up for that is the fact that it's unpredictable and people don't know what they're consenting to.

I think this can be reframed in a really positive way, which is to say that the fact that the scope of facial recognition technology was big was probably a good thing. It was something that people could probably benefit from. I think they're pretty uncharitable when they tell you that having a big scope of facial recognition possibilities is bad. I think it could be used for a lot of benefits.

But I think the second thing is that people probably can consent, because there's so much scrutiny, as we've told you down the bench -- things like the fact that the media is really likely to capitalise on this. I think the moral panic about technology is so huge. It is so easy for the media to buy into this and get people to be really scared about whether or not their privacy is being traded off.

I think the public also just has an incentive to care. This is their privacy. I think there are already people that are pretty cautious about whether or not they're being surveilled. I think people are probably pretty vigilant and pretty likely to monitor this.

I think the final thing is even if people can't consent, we just didn't really care. We were very happy to trade this off, because if you're thinking about people being aided and the trade-off is consent, this is something we do heaps in society, which is good. We can raid people's homes. We can collect DNA. There are so many examples of ways in which we are able to achieve more safety by trading off consent. It was something we already did and something we thought was probably pretty beneficial.

I think the second thing the opposing team tries to tell you is that you are violating people's inalienable right to privacy. Four responses. Firstly, I think in a lot of cases, this privacy is consented to: things like Face ID or Apple Wallet are things you can pretty easily opt out of.

But I think, secondly, it's a pretty fair trade-off. So I think the whole idea of the social contract that we've told you down the bench is the fact that crime could happen to you. You were willing to do things like trade off your right to privacy because maybe when that crime happens, it's solved faster. Maybe there's less risk of having a national security threat. I think those were worthwhile trade-offs that people were willing to make.

I think, thirdly, this was just pretty applicable to their side. Other breaches of privacy still exist; the government can do things like monitor telephones for buzzwords to prevent terrorism. There were so many other ways that it happened that they had a really high burden: to prove that facial recognition was so uniquely harmful that you had to get rid of it.

I think the final thing is I think privacy actually gets a lot worse on their side. That's for 2 main reasons. Firstly, I think companies can still use facial recognition. But because it's not monitored, they're likely to do this in a way which is less transparent.

But secondly, they're likely to just turn to other forms of surveillance, which, again, were probably less transparent. I think we proved to you why privacy is better on our side. But even if it wasn't, we were willing to trade it off.

I think finally the opposing team tries to tell you that the government is not held accountable. This is principally wrong. Two responses; firstly, there is such a huge incentive, as we've told you, for people in the media to hold the government accountable.

I think people are pretty worried about things like privacy. People see things like the social credit system in China and think, well, I don't want that to happen in Australia. I think people are likely to question government decisions and be sceptical about how their data is being used.

I think this also applies to companies. With Facebook and Cambridge Analytica, when there were data breaches, people were pretty mad about that. I think people are very able to hold companies to account.

But I think, secondly, we told you that it was people that were likely to be risk-averse, not the government. So don't let the opposing team get away with misrepresenting our case. But I think what we tell you is that there are much bigger impacts.

So for instance, when the opposing team tries to tell you that there are other factors in voting, and that people are likely to disregard privacy and prioritise other things: one, I think privacy is a big factor because it influences everyone, and we told you it was something people cared about.

But 2, governments were still likely to listen even if privacy wasn't the vote-deciding issue. A government doesn't want to look really bad for being the government that infringed on people's privacy. I think that was a pretty clear democratic mechanism we were able to prove.

At the end of this principled argument, I think we proved to you 3 things. Firstly, we maximise transparency and therefore achieve this principle. Secondly, other competing principles also existed. For instance, the government had a principled obligation to look after people. That was something we were able to fulfil.

But finally, even if consent and privacy are worse on our side, that's fine, because we told you we got safety. I think they needed to prove to you why this was so uniquely principally bad, because at the end of this debate, what we end up with is technology on their side which is not as practically effective but has the same principled harms. That is why we won today's debate.

On to the second issue, then: why do we think that we won in terms of practical benefits? I think we hear 4 main claims from the opposing team. The first thing they tell you is that data breaches are likely to happen.

Three responses; firstly, there are so many incentives for companies not to do this: their money, their legitimacy, they don't want to get in trouble and get fined. It's so unclear why companies are likely to engage in data breaches.

And I think they're quite likely to get caught, because there are so many actors holding them accountable. There is the government. There is the media.

But I think, secondly, data breaches probably also happened on their side to the exact same extent. People could leak your Pay ID. People could leak your fingerprints. These were just as harmful as what we told you.

The second thing that they try and tell you is that people are going to obtain photos of you, and this is bad for 3 reasons. Firstly, employment opportunities; 2 responses. Firstly, this is probably good. I think we would support companies being able to employ people on the basis of what they've done in their past and how good they are as employees.

But I think, secondly, it happens in their world, as well. You still have social media. If you're an employer and want to employ someone, you can literally just look at their private social media account from when they were 8. That's so symmetric.

But I think then the next thing they try and tell you is that blackmail and harassment are likely to happen on their side. Two responses; [bell ringing] firstly, I think this can be mitigated so easily, because there are so many other factors besides facial recognition systems that cause this, like the fact that there's probably someone close to you that holds information they can blackmail you with.

It probably happens in other ways. For instance, someone texts you a Pay ID and is like, I'm going to scam you; please do this. It can probably be monitored better on our side when it's legal. I think we're able to mitigate that hugely.

But I think, secondly, it's super uncharitable, because what the opposing team does is capitalise on the worst possible ways that people can use these facial recognition systems, and they absolutely ignore all the benefits that we give you. I think the benefits we give you about safety and security would always trump in this debate.

But I think, finally, they tell you that companies are going to exploit people, for example in the case of Bunnings. Two responses; firstly, this is literally illegal, and people know about it. There was pretty big outrage about Bunnings doing this without informing people. I think people are likely to hold them to account.

But I think, secondly, even if they do this, it's probably OK. It happens anyways. On social media, you literally sell your data, and Facebook gives you targeted ads. It happens. It probably just makes your shopping experience more convenient. It doesn't seem like a huge harm.

OK, I think the third thing they try and tell you, to mitigate our benefits, is that there are heaps of mistakes with ID. Two responses; firstly, I think these systems are actually hugely better because you don't get human error. When they tell you things like DNA is more effective than facial recognition: facial recognition can immediately identify a person through literal technology.

DNA is something that happens in labs. You have one person who is corrupt or one person who messes up, a bunch of people get imprisoned wrongly. That was something they had to deal with.

But I think, secondly, it can be much better systemically removed. That is to say that if you have a biased police officer or a biased witness, that was probably something you couldn't monitor for and something you probably couldn't change. If you had, say, biased AI systems, that was something you could systemically reroute. I think we proved to you why mistakes probably happen less on our side.

The next thing they try and tell you is that this isn't going to work as a deterrent for crime. Three responses; firstly, it definitely does. If you were walking down the street, and if you stole from someone or harmed someone, there would be footage of that, [bell ringing] and you would be able to be identified. I think you were much less likely to commit that crime.

But secondly, they try and tell you that this won't happen in crimes of passion or where people have optimism bias. Fine, maybe that happens in like 70% of crimes. Even if we just fix 2% of crimes, I think this was still worth it. But it was probably heaps more than that.

But I think, finally, even if it was not a deterrent, we gave you other crime benefits, things like the fact that you were able to get retribution. You were able to get justice better. Those still stood in this debate.

I think their final claim, which they bring up at third, is distrust. Two responses; firstly, I think distrust increases when there is no information, when you know that something is bad but you think it still happens. But I think, secondly, we'd be able to trade this off against all the kinds of impacts we gave you.

At the end of this debate, you should know that facial recognition was better because there was less human error. It was way more widespread and effective. And that just means that, practically, it was much better than something like DNA technology.

But principally, it was the same in terms of privacy. Our worst case was one where maybe facial recognition helped sometimes and made some errors. Note that their absolute best case was one where other forms of security were implemented [bell ringing] and were not nearly as effective. For that reason, I think we won this debate.

[applause]

LUKA MILETIC: A member of the adjudication panel will now deliver the adjudication and announce the result of the debate.

[applause]

ELLIE STEPHENSON: Hi, everyone. Thanks very much for such a fascinating and obviously quite current debate. And congratulations to both teams for reaching the state final. It's a very large achievement. And I think that both teams should be extremely proud of themselves for making it here and delivering such an interesting debate. So just to start, can we please congratulate both teams?

[applause]

And also, can we please congratulate the teams' teachers and coaches who are, of course, invaluable for them, as well?

[applause]

And finally, to those who didn't make it to the state final: there have been an enormous number of really talented debaters and really talented teams on the path to this final. So congratulations to those teams around the state and their coaches.

[applause]

So let's get into it. I'm going to save on the general feedback because this was obviously a fascinating and quite good debate. So I don't think that we need that. We're just going to get into it.

And I'll explain the result of the panel, which was unanimous. So to explain that, I'm going to ask 2 questions. Firstly, does facial recognition present an unjust imposition on the right to privacy? And secondly, does facial recognition provide unique benefits or harms with respect to security and safety?

So firstly, on the principled clash, which I thought was super interesting and lovely to see in this debate, what do we hear? Affirmative on this issue explains that people should have rights over their own image; sort of property rights in the sense that they uniquely possess their body.

It's very important to them. And it matters when other people invade that privacy. I think that this is clearly true in a lot of instances, although I do think we possibly could have had more analysis as to why this right is inviolable or should not be traded off against other rights.

Secondly, then, they explain that people specifically are unable to consent to the scope and risks of facial data collection. That is to say that facial data collection and storage has expanded rapidly. It's very unpredictable. And people often don't understand what they're signing up for when they agree to it.

Negative has a series of responses to this argument. Their first response is that we frequently trade off privacy rights in order to achieve public safety and consumer benefits. That is to say that the government already collects huge amounts of data on us, monitors our phone calls, checks our search history; that the public pretty frequently consents to giving their data to big companies and to using things like Face ID on their iPhone out of convenience; and that those are fair trade-offs that society can reasonably make within a social contract.

I think that this is quite a clever response to affirmative's case because although it admits that sometimes privacy rights do exist, I think it notes that we probably have to engage in trade-offs. And privacy, in itself, is perhaps not sufficient to win this debate.

Additionally, negative has some response to the idea that people cannot consent to the current scope of facial data collection. They explain that people pretty frequently continue to use, and consent to using, services that involve facial data collection, but, additionally, that there are other ways to protect privacy, such as improved transparency and regulation. Again, I don't think this is sufficient to win this clash, but it does damage the extent to which a ban is uniquely required to protect privacy; protection can possibly occur in other ways.

So what that means, is at the end of this issue, although I think affirmative does a really good job of explaining why privacy is so important in society, I think negative is able to explain that perhaps a ban is not the only way to protect people's privacy, and, additionally, that it would be permissible to make trade-offs for privacy if we could achieve benefits for public safety. So on the principle, then, I don't think that this is determinative. I think it's a really interesting principle contribution, but perhaps does not solve the issue of facial data collection.

So secondly, then, on the practical clash: does facial recognition provide unique benefits or harms with respect to safety and security? Just as a note here, both teams in this debate note that there are a series of other instances where personal data are collected and stored. So this means that both teams need to, and sometimes seem to struggle a bit to, distinguish their benefits and harms from other things like CCTV, DNA collection, and fingerprint matching, which at times narrows the debate and makes it difficult to identify what specifically is unique about facial recognition as opposed to other forms of data collection.

With that said, what do we hear on this issue? The first part of this issue is a clash about the extent to which regulation is possible because a key negative claim is that insofar as facial data collection and facial recognition technology can be bad, it is preferable to regulate it rather than ban it and then not be able to regulate it effectively.

Just as a note on this argument, we think it's a little bit unclear why the government would be unable to successfully regulate or monitor facial recognition technology once it is banned. We think that to the extent that negative argues that there is enough political capital and enough pressure from the population to regulate facial data collection and facial recognition, while it is legal, it was a bit unclear why there wouldn't be similar political appetite to ensure that a ban was enforced. So we thought that while this was an interesting argument, negative probably did have to admit that affirmative was able to successfully enforce their ban on data collection.

But the upshot of that is still that regulation is also possible for negative. So insofar as affirmative can model a successful ban that is enforced and stops facial recognition, negative is able to counter-propose the use of regulations. What those regulations involve is left a little bit vague in this debate. But negative does explain that regulation could look like ensuring transparency and ensuring that consumers know what sorts of facial recognition software the government and corporations are using.

So at the end of this clash on regulation, we think that negative is probably right that there is an appetite for regulation or for oversight of data. This means that affirmative can successfully propose a ban, but also that some other alternatives might be possible, as well. Again, this isn't determinative in the debate, but it is a pretty important characterisation clash for weighing some subsequent issues.

All right, on to the meat of this issue, then, which is weighing the personal harms that affirmative describes against the security benefits that negative describes. Starting with those personal harms, essentially, the affirmative argument here is to note that this constitutes a huge amount of data on people's faces that is linked to other forms of data about their lives, and suggests that this poses a significant risk to those people's safety and to those people's agency.

What are the risks described by affirmative? The first risk that we hear about is about people's right to protect their reputation. So affirmative suggests that this could be something which makes it hard for people to get jobs or means that people struggle to separate themselves from past actions. Their face becomes overly recognisable and easily searchable.

I think that this harm is, I suppose, a little bit asserted, in the sense that it's a bit unclear to the panel why people have this right to a reputation -- why, if people did things in the past, they deserve a right not to be associated with those things. But we also thought that this was sufficiently pushed back upon by side negative, who explained that it is often legitimate for companies to, for example, know things about people's past. And so while this monitoring might in some cases be bad, we weren't sure that it would be on net.

The second push that we get from affirmative is that this could just be used to impersonate people or to lead to data breaches. For example, people could do things like use masks to impersonate others. This certainly seems like something that could happen, although we did think it was a little bit speculative in the sense that it is a bit unclear why that is the preferred or likely strategy, I suppose, of criminals if they want to do this, and why it would happen frequently. Additionally, I think it's fair for negative to point out that a lot of these harms seem a bit symmetrical in that they can occur if you have access to things like CCTV or other information about a person. It's unclear why this is the dominant or most important route to impersonation.

Finally, affirmative suggests that corporations might use this data in ways which harm people, by manipulating their preferences and trying to sell them things on the basis of having tracked their face. I think this is a really interesting argument, and it would have been nice to hear perhaps a bit more about it.

But I do think that the negative response is sufficient where they explain that this isn't always a harm. Perhaps it just makes your shopping experiences more convenient. But more importantly, that this is a bit indistinct from other forms of monitoring. For example, shops already track your data in a very large way.

Finally, affirmative explains that there is just a big risk of data breaches. There are problems with hacking; people are getting into information all the time.

I think that negative does a lot of work in response to this argument to explain that problems with hacking are symmetrical: people often get access to things like your Pay ID, your address, your email address, or your passwords, and it's unclear why those things are any less damaging to your life than your face. The response we get from affirmative is, well, your face is attached to your body and it's very difficult to change, which I think is fair, but probably also true of some of the other forms of personal data described by negative.

What that means is at the end of this series of harms, we certainly think that there are some big risks when it comes to facial recognition. But it was a little bit difficult to see why those things were unique to facial recognition and not a problem of just having very large amounts of data on people. Let's weigh them, then, against the benefits that we hear from negative.

Negative essentially explains 3 benefits, the first one being probably the largest. So the first benefit is about criminal justice. They suggest that facial recognition could be a really good way to contribute to legal cases by identifying offenders and making legal cases more efficient.

The response that we get from side affirmative is that potentially this is not very unique. You can still do things like DNA collection. You can still recognise and have witnesses describe people who have offended. I think that this is certainly true and potentially mitigates the need for this change. But I think it probably does concede that there are some benefits to be garnered.

And what we hear from side negative in response to that rebuttal is to suggest that facial recognition is extremely public. There's a lot of it, and it's easier to use it to recognise people than something perhaps more specific and physical like DNA. What that explains, I think, is that negative probably does get a small but certain benefit when it comes to criminal justice and recognising people when you need to for security reasons.

Secondly, and relatedly, they explain that this data collection and facial recognition has been used successfully in places like airports for a long time, that it adds a degree of security that keeps all of us safe. I don't think this argument gets a huge amount of explanation, but I think it is an observable truth about our society that this data is often used in the way that negative describes. And although, again, affirmative suggests that perhaps we could use other ways to achieve those same benefits, things like fingerprints, it's unclear necessarily why those are preferable to facial recognition given that negative explains that facial recognition is particularly unique to a person, particularly easy to incorporate into these systems. And therefore, I think they get to claim, again, a small benefit here.

The final push that we hear a bit about, and that the panel agreed we would have liked to hear a lot more about, was data security. We hear briefly at second negative, and then again at third negative, that facial recognition can actually be useful for protecting data, for example through encryption that uses biometrics to let you in, which might be more secure than things like passwords. We thought that this probably required a little bit more explanation to credit as a big benefit, because we didn't hear a lot of explanation as to why this was preferable to things like passwords.

But I think that it does create a degree of symmetry with the problems that negative describes about things like data breaches because if it is true that there are risks associated with facial recognition, then negative explains that there are also potential benefits, the possibility of accessing better data security by using biometrics. So that, again, perhaps isn't a massive benefit, but I think it is something that weighs into the debate.

So what does the panel believe at the end of the debate? We didn't necessarily think that facial recognition represented, in and of itself, an unjust imposition on the right to privacy. But we did think it created a series of risks and a series of potential benefits.

However, ultimately, we thought that the set of benefits described by the negative team seemed to be more unique to facial recognition as opposed to the set of risks described by affirmative, which seemed like they might just be problems with the sort of data that we collect more broadly. So for those reasons, we weighed the set of risks described by affirmative slightly less than the set of benefits described by negative, and ultimately awarded the debate in a very close decision to side negative. Congratulations.

[applause]


End of transcript