Harry Surden

Harry Surden to Serve as Interim Executive Director of Silicon Flatirons (April 6, 2023)

The University of Colorado Law School is pleased to announce the appointment of Harry Surden as interim executive director of the Silicon Flatirons Center for Law, Technology, and Entrepreneurship. Surden, a beloved professor of law at Colorado Law, brings extensive experience in the tech industry and the legal academy to this role.

Silicon Flatirons is a leading research center at the University of Colorado Law School that focuses on the intersection of technology and policy. As interim executive director, Professor Surden will oversee the center's operations, including events and research initiatives.

"I'm thrilled to serve Silicon Flatirons as interim executive director," said Professor Surden. "The center has a well-earned national reputation for fostering innovation and exploring the impact of technology on society, and I'm excited to play a new part in building on that legacy."

Professor Surden has been a faculty member at the University of Colorado Law School since 2008, where he teaches courses on intellectual property, patent law, torts, and artificial intelligence. He is also an affiliated faculty member at the Stanford Center for Legal Informatics (CodeX). Prior to law school, Professor Surden worked as a software engineer for Cisco Systems and Bloomberg Finance L.P.

"We are thrilled Harry has agreed to serve as interim executive director of Silicon Flatirons," said , dean of the University of Colorado Law School. "His cutting-edge scholarship, passion for teaching, and commitment to the center make him the ideal person to lead Silicon Flatirons during this transitional period."

Silicon Flatirons will continue its mission of fostering innovation and exploring the impact of technology on society under Surden's leadership. The center's upcoming event, "Exploring Generative AI and Law," will take place on April 21, 2023, and will explore the role of generative AI in several legal contexts.

The first is the ability of large language model (LLM) AI systems like ChatGPT to produce legal documents, such as contracts, legal pleadings, patents, and other written legal instruments. Another is the impact of AI-generated art and music in the context of copyright law and fair use. Finally, participants will examine the societal implications of AI systems that can produce high-quality outputs, such as legal documents or art, that previously only people could create. The conference will bring together leading experts in the fields of law and AI to discuss the legal and policy implications of generative AI.

For more information about Silicon Flatirons and the upcoming "Exploring Generative AI and Law" conference, please visit the Silicon Flatirons website.

In the summer of 2023, a new executive director will assume leadership of the center. We are excited to share more about this appointment in the coming weeks!

Professor Harry Surden Delivers 2022 Austin W. Scott Jr. Lecture (November 17, 2022)

Harry Surden, Professor of Law and director of the Silicon Flatirons Center's Artificial Intelligence Initiative, delivered the 47th annual Austin W. Scott, Jr. Lecture at the University of Colorado Law School on November 10. The lecture was titled "Artificial Intelligence and Law." The Scott Lecture is presented annually by a member of the faculty, selected by the dean, who is engaged in a significant scholarly project.

In this lecture, presented in a hybrid format, Professor Surden—himself a former software engineer and leader of the emerging interdisciplinary field of AI and Law—explored: What is Artificial Intelligence? How is law affecting Artificial Intelligence? What are the major issues involving AI, law, and society today and in the near future? 

Artificial intelligence (AI) is much in the news these days. Professor Surden explained that, as a concept, AI could seem completely unrelated to the field of law; however, AI and law are intricately intertwined and are becoming more so each day. Law regulates artificial intelligence, but artificial intelligence also affects the practice of law.

Professor Surden devoted a good portion of the lecture to explaining in broad strokes how AI actually works, what it is and what it is not. His remarks emphasized that in order to understand the intersection of AI and the law, we must seek to understand the technology in its own right. Surden explored the limits of AI as it exists today and how it differs from the way popular culture often portrays it.

Ultimately, AI is neither inherently good nor bad for law, explained Professor Surden. It holds potential for a fairer legal system or, if used improperly, one that is less fair and more prone to bias.

About Professor Harry Surden

Harry Surden is a Professor of Law at the University of Colorado Law School. He joined the faculty in 2008. His scholarship centers upon artificial intelligence and law, legal informatics and legal automation (including machine learning and law), self-driving cars and law, intellectual property law with a substantive focus on patents and copyright, information privacy law, and the application of computer technology within the legal system.

Prior to joining CU, Professor Surden was a resident fellow at the Stanford Center for Legal Informatics (CodeX) at Stanford Law School. In that capacity, Professor Surden conducted interdisciplinary research with collaborators from the Stanford School of Engineering exploring the application of computer technology towards improving the legal system. He was also a member of the Stanford Intellectual Property Litigation Clearinghouse and the director of the Computer Science and Law Initiative.

Professor Surden served as a law clerk to the Honorable Martin J. Jenkins of the United States District Court for the Northern District of California in San Francisco. He received his law degree with honors from Stanford Law School and was the recipient of the Stanford Law Intellectual Property Writing Award.

Prior to law school, Professor Surden worked as a software engineer for Cisco Systems and Bloomberg Finance L.P. He received his undergraduate degree with honors from Cornell University.

Professor Surden is an affiliated faculty member at the Stanford Center for Legal Informatics (CodeX).

About the Annual Austin W. Scott, Jr. Lecture Series

The Austin W. Scott Jr. lecture is named after Austin Scott, a member of the law school faculty for 20 years. He was a beloved teacher as well as a prolific writer, whose scholarly work was in the fields of criminal law and procedure. In 1973, former Colorado Law Dean Don W. Sears established the lecture series in his memory. Each year, the dean of the law school selects a member of the faculty engaged in a significant scholarly project to lecture on his or her research. Learn more about the Austin W. Scott Jr. Lecture.

See recordings of recent lectures on YouTube.

Watch the lecture: https://www.youtube.com/watch?v=zQdKg0i7qNY

Harry Surden: Colorado Roofing Company Skyyguard Now in Third Year Fighting "Trademark Bullies" Skyy Vodka | The Denver Post (February 10, 2021)

Read the story: https://www.denverpost.com/2021/02/10/colorado-roofing-company-skyyguard-trademark-skyy-vodka/

Harry Surden: Artificial Intelligence in Government and the Law | The Regulatory Review (December 21, 2019)

Read the story: https://www.theregreview.org/2019/12/21/saturday-seminar-artificial-intelligence-government-law/

Harry Surden: AI, Few Guardrails—The Lawyer's Response | Bloomberg Law (November 4, 2019)

Harry Surden discusses the future of law in a world dominated by artificial intelligence.

Read the story: https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-ai-few-guardrails-the-lawyers-response

Harry Surden: The Morality of AI in Law | Legal Theory Blog (August 27, 2019)

Read the story: https://lsolum.typepad.com/legaltheory/2019/08/surden-on-the-morality-of-ai-in-law.html

Artificial Intelligence and Law (May 3, 2019)
By Susan Miller ('19) | Amicus, Spring 2019

What impact will the technology that enables self-driving cars, robots, and drones have on the legal profession?

From films to news headlines, artificial intelligence, or AI, is often portrayed as a threat to many modern professions. It's only logical, then, for lawyers to wonder: Should they be worried or enthusiastic? Will AI take over the legal profession as we know it—or will it bring more access to legal services and enable improved lawyering?

Associate Professor Harry Surden, a distinguished scholar in the areas of AI and law, regulation of autonomous vehicles, and legal automation, suggests most legal careers will remain safe. Rather than replacing lawyers, he says, AI can actually enhance legal work by streamlining mechanical tasks, giving attorneys more time to spend on abstract reasoning and problem-solving.

From software engineer to law professor

Surden's background is somewhat unconventional for a law professor. As an undergraduate student, he simultaneously pursued courses in computer science and political science, wondering to what extent computer science might apply to law and policy. After working as a professional software engineer for several years, he decided to explore this cross-disciplinary approach further in law school. He earned his JD from Stanford before clerking for a federal judge in San Francisco. From there, he returned to his alma mater as a researcher, where he further pursued this idea of computer science applied to law, sometimes called "legal informatics." In 2006, he co-founded the Stanford Center for Legal Informatics (CodeX) and served as its first research fellow. In that role, Surden helped develop a proof-of-concept research project that allowed architects to automatically determine when their electronic building designs were in compliance with local building code laws.

"Professor Surden has always dedicated his research in AI and law to questions of immediate relevance to the field,” said Roland Vogl, a professor of law at Stanford and executive director of CodeX. “He has an incredible ability to explore the topics of his research thoroughly, while still presenting very complex issues in a way that makes them accessible to lawyers and computer scientists alike.”

Surden joined the faculty of Colorado Law in 2008, where his scholarship has included such articles as "Machine Learning and Law" (Washington Law Review), "Technological Opacity, Predictability, and Self-Driving Cars" (Cardozo Law Review), and "Computable Contracts" (UC Davis Law Review). He teaches technology- and law-related courses such as Patent Law and Computers and the Law.

Over the years, Surden's academic research interest crystallized around a particular aspect of computer science and law: artificial intelligence. He was drawn to AI in the early 2000s as he observed AI techniques moving out of university laboratories and becoming widely integrated throughout society. At that time, AI was comparatively understudied as a topic within law. While researchers today are more attuned to AI, Surden remains part of a relatively small group of law professors who are not only studying the impact of AI on law and policy but also building software applications that apply AI to legal topics. Surden's research has focused on applying AI techniques to various problems in patent and contract law, and, in 2018, he was awarded the University of Colorado's Provost Award for his research on legal informatics.

How the legal profession can use machine learning

  E-discovery document review: AI can improve organization and reduce the amount of discovery clutter by sorting through millions of e-discovery documents and filtering out pages that are irrelevant to a case.

  Litigation predictive analysis: By leveraging data from past client scenarios and other relevant public and private data, AI can predict future likely outcomes on particular legal issues that could complement legal counseling.

  Legal research: AI can improve organization by grouping documents together based on nonobvious shared qualities, thereby simplifying the research process and saving attorneys time.

"Professor Surden's work on autonomous vehicles is important for both consumer protection and the policy of technological design," said Colorado Law Associate Professor Margot Kaminski, whose own research on the law of information governance, privacy, and freedom of expression overlaps with Surden’s as it relates to autonomous systems such as AI, robots, and drones. The pair organized a May conference in partnership with the law school’s Silicon Flatirons Center for Law, Technology, and Entrepreneurship titled “Explainable Artificial Intelligence: Can We Hold Machines Accountable?” (more information available at ).

"[Surden’s work] shifts the conversation away from the over-discussed 'trolley problem' (that is, the question of how to decide who gets hit when a car is offered a choice between two people) to the more pressing question of how to design an entire environment of interaction between autonomous cars and human drivers," Kaminski said. "That's where the harder, more practical questions lie. By doing interdisciplinary research—he's one of few law professors to collaborate with a roboticist—Professor Surden is a trailblazer in this field."

The AI of today

Science fiction and the media often depict AI as intelligent computers capable of discussing deep, abstract, and insightful ideas with humans, or of acting at a level that meets or surpasses general human intelligence. That is not the AI we use or have today, nor is there evidence that we are near such "strong" AI, Surden says.

Rather, AI today is best understood as using computers to solve problems and make automated decisions that, when done by humans, are usually thought to require intelligence, Surden says. However, he notes that these automated decisions are typically based not on artificial human-level intelligence, but on algorithms detecting patterns in large amounts of data, and using statistics to make educated approximations—known as machine learning.

Machine learning, the dominant approach to AI today, can often produce useful, accurate outcomes in certain domains such as language translation. But because these techniques rely on detecting complex patterns in data, Surden describes them as "producing intelligent results without intelligence."

For example, when a machine learning-based computer system produces a translation, it usually does so using statistical associations. However, such a pattern-based machine learning approach—while often producing decent translations—does not actually involve the computer “understanding” what it is translating or what the words mean in the same way a human translator might.
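
To make that concrete, here is a minimal sketch, in Python, of translation by pure statistical association. The toy parallel corpus and the Jaccard scoring rule are invented for illustration (real translation systems are vastly more sophisticated); the point is that the program pairs each English word with the Spanish word it most exclusively co-occurs with, and at no point represents what any word means.

```python
from collections import Counter, defaultdict

# Toy parallel corpus of (English, Spanish) sentence pairs -- a hypothetical,
# hand-made example for illustration only.
corpus = [
    ("the cat sleeps", "el gato duerme"),
    ("the dog sleeps", "el perro duerme"),
    ("the cat eats", "el gato come"),
    ("the dog eats", "el perro come"),
]

cooc = defaultdict(Counter)  # cooc[en][es] = sentence pairs containing both words
en_count = Counter()         # sentences containing each English word
es_count = Counter()         # sentences containing each Spanish word

for en_sent, es_sent in corpus:
    en_words, es_words = set(en_sent.split()), set(es_sent.split())
    en_count.update(en_words)
    es_count.update(es_words)
    for e in en_words:
        for s in es_words:
            cooc[e][s] += 1

def best_match(e: str) -> str:
    # Jaccard overlap: how exclusively the two words appear together.
    return max(cooc[e], key=lambda s: cooc[e][s] / (en_count[e] + es_count[s] - cooc[e][s]))

print(" ".join(best_match(w) for w in "the dog sleeps".split()))
# -> "el perro duerme", produced purely from co-occurrence statistics
```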

Despite these limitations, machine learning has been instrumental in producing many recent breakthrough technologies. For example, as Surden explains in "Technological Opacity, Predictability, and Self-Driving Cars," algorithms in autonomous vehicles learn to drive themselves by detecting patterns of braking, steering, and acceleration based on data from human drivers. Other popular machine learning applications include an email spam filter that uses algorithms to detect common words or phrases used in spam to filter out emails that may clog inboxes; credit card fraud detection; and automated cancer tumor diagnosis.

Surden presents at BYU Law School about how the values and biases in artificial intelligence impact the justice system. (Matt Imbler/BYU Law)

AI and the law

How does machine learning apply to the field of law? In his widely cited article "Machine Learning and Law," Surden notes that a limited number of legal tasks may benefit from current machine learning approaches. Core tasks still require a great amount of problem-solving and abstract reasoning that pattern recognition or machine learning is unable to replicate. However, a fair number of relatively mechanical tasks within law can benefit from AI, such as e-discovery document review, litigation predictive analysis, and legal research.

E-discovery document review is an example of machine learning starting to make inroads into legal tasks that have traditionally been performed by lawyers. Like email spam filters, AI can detect patterns in documents that can then be used to sort through the millions of e-discovery documents and filter out pages that are likely irrelevant to the case. This in turn leaves far fewer potentially relevant documents for attorneys to analyze.
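
As an illustration of the kind of pattern-based filtering described above, here is a minimal sketch of a document-relevance classifier built with scikit-learn. The labeled examples and the review threshold are hypothetical; a real e-discovery system would be trained on far larger collections and validated by attorneys.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical attorney-labeled training documents:
# 1 = relevant to the case, 0 = irrelevant.
train_docs = [
    "merger agreement between acme and widgetco signed in march",
    "board minutes discussing the widgetco acquisition price",
    "due diligence memo on widgetco liabilities",
    "cafeteria menu for the week of june 5",
    "reminder to update your parking permit",
    "company picnic rescheduled to friday",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns each document into word-frequency features; logistic
# regression learns which word patterns tend to signal relevance.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, labels)

# Score unreviewed documents and route only the likely-relevant ones
# to human reviewers.
new_docs = [
    "draft purchase agreement for widgetco shares",
    "lost and found: blue umbrella in the lobby",
]
for doc, p in zip(new_docs, model.predict_proba(new_docs)[:, 1]):
    print(f"{p:.2f}  {'REVIEW' if p > 0.5 else 'skip'}  {doc}")
```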

Additionally, AI can be used for predictive analysis in litigation. Surden explains that while attorneys in the past might have told clients they had an 80 percent chance of early settlement based on experience and intuition, AI can provide substantive support. By using data from similar cases, claims, or fact patterns, AI can predict potential outcomes or even show trends over time. However, one drawback noted by Surden is the difficulty of predicting outcomes for unique cases with distinct fact patterns.
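
A minimal sketch of that idea follows, with wholly invented case features and outcomes; commercial predictive-analytics products train on large litigation databases rather than a handful of rows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical past matters: [claim amount ($k), number of claims,
# opposing party's historical settlement rate].
X = np.array([
    [50, 1, 0.80], [200, 3, 0.60], [75, 2, 0.90], [500, 5, 0.30],
    [30, 1, 0.70], [350, 4, 0.40], [120, 2, 0.85], [400, 6, 0.20],
])
# Outcome: 1 = settled early, 0 = proceeded toward trial.
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# A data-backed estimate to complement, not replace, counsel's intuition.
new_case = np.array([[90, 2, 0.75]])
print(f"Estimated early-settlement probability: "
      f"{model.predict_proba(new_case)[0, 1]:.0%}")
```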

Finally, other more controversial uses of AI in the law exist, such as the use of AI in criminal sentencing or in providing statistics on the probability of reoffending. The patterns in past data on criminal sentencing may contain biases that a machine cannot detect, and reliance on AI would preserve such biases into the future. Thus, while AI may not be suited to all legal tasks, certain assignments may be done more effectively and efficiently by using AI.

There are many other examples of ways in which AI can be used in law. Surden’s research has focused on so-called "computable contracts": legal contracts in which the content and the meaning of the contractual obligations are represented in a way that can be understood and automatically applied by computers. Surden has convened a working group at Stanford that is focused on moving this process out of the university laboratory and into the world. His other research has focused on ways in which machine learning can lower barriers to access to legal services for low-income communities.
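
A toy sketch can convey the flavor of a computable contract: the term's data and its compliance logic are both machine-readable, so a computer can check performance directly. The PaymentTerm class and its parameters below are invented for illustration and are not drawn from Surden's actual formalism.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PaymentTerm:
    """A hypothetical machine-readable contract term: data plus logic."""
    payer: str
    payee: str
    amount_due: float
    due_date: date

    def is_satisfied(self, amount_paid: float, paid_on: date) -> bool:
        # The computer, rather than a human reader, applies the obligation.
        return amount_paid >= self.amount_due and paid_on <= self.due_date

term = PaymentTerm(payer="BuyerCo", payee="SellerCo",
                   amount_due=10_000.0, due_date=date(2019, 6, 1))

print(term.is_satisfied(10_000.0, date(2019, 5, 28)))  # True: paid in full, on time
print(term.is_satisfied(10_000.0, date(2019, 6, 15)))  # False: paid late
```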

The AI of tomorrow—and beyond

The use of AI in mechanical tasks will likely continue to expand, and Surden suggests that law students position themselves in an area of law that requires abstract reasoning rather than repetitive tasks that will soon become obsolete. However, there are limits to the use of AI in law. For example, AI still requires patterns and rules and is ineffective for unique fact patterns and distinct cases. AI still cannot complete the abstract reasoning that attorneys carry out, and it is unlikely such complex functions will be automated anytime soon. Finally, Surden notes that while speculation on futuristic cognitive AI is tempting, it is better to understand the existing technology and plan accordingly.

Thus, while AI is likely to replace some legal tasks that today involve mechanical repetition or underlying patterns, lawyers do a variety of things, such as advising clients, problem-solving, formulating persuasive arguments, and interpersonal work, that are unlikely to be automated away soon. However, Surden cautions that we shouldn't focus only on the job-reducing aspects of new technology.

Historically, while new technologies have often reduced certain jobs, they have also created entirely new classes of jobs that were difficult to anticipate. For example, the rise of computing technology eliminated many jobs held by humans who performed mathematical calculations for a living, but that same technology gave rise to entirely new classes of jobs, such as data analyst and software engineer, that were hard to predict at the time. Surden says law is likely to follow a similar path.

"Although AI’s entry into law is likely to eliminate or reduce some existing legal tasks, it is also likely to create entirely new categories of legal jobs in the future—perhaps legal data analyst or machine learning legal specialist—that are today hard to imagine," Surden says.

"Like all technological revolutions, the future of law influenced by AI will not necessarily be good or bad overall for the profession. The only thing that we can count on is that it will be different."

 

This story originally appeared in the spring 2019 issue of Amicus.

Explainable Artificial Intelligence: Can We Hold Machines Accountable? A Q&A with Professors Surden and Kaminski (April 29, 2019)

Harry Surden and Margot Kaminski, associate professors at the University of Colorado Law School, are leaders in exploring the future of AI: how technologies that rely on computer-based decision making offer major prospects for breakthroughs in the law, and how those decisions are regulated.

They organized a May 3 conference at Colorado Law titled "Explainable Artificial Intelligence: Can We Hold Machines Accountable?" The conference was hosted by the law school’s Silicon Flatirons Center, of which Surden serves as interim executive director and Kaminski as faculty director for its privacy initiative.

We sat down with Surden and Kaminski to get their take on explainable AI and how humans can help guide computers to fulfill their intended purpose: to serve us well.

Harry Surden

Margot Kaminski

Let’s begin with a definition. What is "explainable" AI?

Kaminski: Explainable AI is AI that provides an explanation of why or how it arrives at a decision or output. What this means, though, depends on whether you ask a lawyer or a computer scientist; this discrepancy is part of what inspired the conference. A lawyer may be interested in different kinds of explanation than a computer scientist, such as an explanation that reveals whether a decision is justified or legal, or that allows a person to challenge the decision in some way.

What problem is AI explainability trying to solve?

Kaminski: What problem you're trying to address with explanations can really influence how valuable you think they are, or what form you think they should take. For example, some people focus on the instrumental values of explanations: catching and fixing errors, bias, or discrimination. Others focus on the role of explanations in preserving human dignity, providing people with the ability to push back against automated decisions and maintain autonomy of some kind. We expect a healthy debate over this at the conference.

Surden: There are certain core legal values: justice, fairness, equality of treatment, due process. To the extent that AI is being used in legal determinations or decisions (e.g., criminal sentencing), there is some sense that legal norms, such as justifying government decisions or providing rational or reasonable explanations, should be part of that process. One line of thinking is that having AI systems provide explanations might help foster those norms where they are absent today in AI-influenced government determinations.

Legal scholars note that "black box" decision making raises problems of fairness, legitimacy, and error. Why is this concerning to lawyers, governments, policymakers, and others who may be implementing AI in their business practices?

Kaminski: AI decision making is being deployed across the economy and in the government, in areas from hiring and firing to benefits determinations. On the one hand, this can be a good thing: adding statistical analysis into public policy decisions isn’t inherently bad, and can replace human bias. On the other hand, though, there is the real problem of "automation bias," which indicates that humans trust decisions made by machines more than they trust decisions made by other humans. When people use AI to facilitate decisions or make decisions, they’re relying on a tool constructed by other humans. Often they don’t have the technical capacity, or the practical capacity, to determine whether they should be relying on those tools in the first place.

Surden: Part of the legitimacy of the legal system depends upon people believing that they are being treated fairly and equally, and that government decisions are happening for justifiable reasons. To the extent that AI is used in government decision making but remains opaque or inscrutable, it may undermine trust in the legal system and the government.

Judges increasingly rely on AI systems when making bail or sentencing decisions for criminal defendants, as Professor Surden has described. What potential issues, such as racial bias, does this raise? More broadly, how do we avoid feeding biased data to our machine learning systems?

Surden: One of the problems is that the term "bias" itself has many different meanings in different contexts. In computer science and engineering, for example, "bias" is often used as a technical term meaning something akin to "noise" or "skew" in data; it carries no sociological or societal meaning in that usage. By contrast, in sociological contexts and in everyday use, "bias" often connotes improper discrimination against, or treatment of, historically oppressed minority groups. There are other, more nuanced meanings of bias as well. While many of these variants of "bias" can exist in AI systems and data, one problem is simply identifying which variants we are talking about or concerned with in any given conversation. Another major issue is that there are many different ways to measure whether data or AI systems are "biased" in improper ways against particular societal groups. Which approach best reduces harm, and which is "fairest," is contested, and this needs to be part of a larger social dialogue.
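
Surden's point that fairness can be measured in multiple, often conflicting ways can be made concrete with a small sketch. The predictions and group labels below are invented; the two metrics (demographic parity and equal opportunity) are standard definitions from the fairness literature.

```python
# Hypothetical decisions from an AI system for two groups, A and B.
# Each tuple: (group, actual outcome, system's decision), 1 = favorable.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

def positive_rate(group):
    # Demographic parity compares rates of favorable decisions across groups.
    preds = [p for g, _, p in data if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    # Equal opportunity compares favorable decisions among those who
    # actually merited them (actual outcome = 1).
    hits = [p for g, t, p in data if g == group and t == 1]
    return sum(hits) / len(hits)

for g in ("A", "B"):
    print(f"group {g}: positive rate {positive_rate(g):.2f}, "
          f"true positive rate {true_positive_rate(g):.2f}")
# The two metrics can disagree about which group is disadvantaged and by
# how much; a system can be "fair" under one definition and not another.
```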

In a recent article, Professor Kaminski points out that the decisions made by machine-learning algorithms, which are used to make significant determinations about individuals from credit to hiring and firing, remain largely unregulated under U.S. law. What might effective regulation look like?

Kaminski: To effectively regulate AI, we have to figure out why we want to regulate it. What's the problem we're trying to solve? Senators Wyden and Booker recently proposed legislation in the United States that would require companies to perform "Algorithmic Impact Assessments" and do risk mitigation around AI bias. That's great if your only concern is instrumental (fixing bias, but not addressing human dignitary or justificatory concerns) and if you trust that agency enforcement is strong enough that companies will self-regulate without individual challenges. My answer to this question, in a nutshell, is that we probably need to do both. We need a regulatory, systemwide, ex ante approach to AI bias, and also some form of individual transparency, or even contestability, to let affected individuals push back when appropriate.

Some claim that transparency rarely comes for free and that there are often tradeoffs between AI’s "intelligence" and transparency. Does AI need to be explainable to be ethical?

Surden: I think that explainability is just one avenue that scholars are pursuing to help address some of the ethical issues raised by the use of AI, and there are several things we don't know at this point. First, we don't know, as a technical matter, whether we will even be able to have AI systems produce explanations that are useful and satisfactory in the context of current law. In the current state of the art, many AI "explanations" are really just dry technical expositions of data and algorithmic structures, not the kind of justificatory narratives that many people imagine when they hear the term "explanation." So the first issue is whether suitable "explanations" are even achievable as a technological matter in the short term. The longer-term question is: even if AI "explanations" are technically achievable, we don't know to what extent they will actually solve or address the ethical issues that we see today in AI's public use. It may turn out that we produce useful explanations, but that the "explanation" issue was a minor problem compared to larger societal issues surrounding AI. Improving "explanation" is just a hypothesis that many scholars are exploring.
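
To see what such a "dry technical exposition" might look like in practice, here is a minimal sketch that "explains" a hypothetical linear scoring model's decision by listing each feature's contribution to the score. The weights and applicant values are invented; note how far this output is from the justificatory narrative a court might want.

```python
# A hypothetical linear scoring model for an automated decision, with an
# "explanation" produced as each feature's weight * value contribution.
weights = {"income": 0.4, "debt_ratio": -0.9,
           "years_employed": 0.3, "prior_defaults": -1.2}
bias = -0.5

applicant = {"income": 1.2, "debt_ratio": 0.8,
             "years_employed": 0.5, "prior_defaults": 1.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())

print(f"decision: {'approve' if score > 0 else 'deny'} (score = {score:.2f})")
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature:>15}: {c:+.2f}")
# An "explanation" in the technical sense only: it reports which inputs
# moved the score, not whether the decision was justified or lawful.
```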

Surden on Ethics of Artificial Intelligence: Can We Teach Computers to be Good People? (March 26, 2019)

When people think about artificial intelligence, or AI, they can be quick to jump to the all-too-common sci-fi depiction of a heartlessly rational computer willing to kill people to fulfill its programming. Real AI is light-years away from that. Today, AI is still pretty far from basic things humans can accomplish, like being able to grasp abstract concepts, according to Harry Surden, a University of Colorado Law School professor and AI expert.

Read the story: /today/2019/03/25/ethics-artificial-intelligence

Harry Surden: The Ethics of Artificial Intelligence: Teaching Computers to Be Good People (Phys.org News) (March 26, 2019)

Read the story: https://phys.org/news/2019-03-ethics-artificial-intelligence-good-people.html
