h+ Magazine

We Need a Legal Definition of Artificial Intelligence

  • #28695
    Peter
    Member

    When we talk about artificial intelligence (AI) – which we have done a lot recently, including my outline in The Conversation of the liability and regulation issues – what do we actually mean?

    [See the full post at: We Need a Legal Definition of Artificial Intelligence]

    #28706
    Peter
    Participant

    While that may be true, the definition is totally useless as far as AI goes.

    Biological animals evolved to deal with a wide range of environments and have complex goals in those environments, so we can easily just look at how well they accomplish their goals to check whether they are intelligent. Nothing of the kind can be said for AI.

    For instance, consider an AI with a hard-coded restriction keeping it from doing anything but proving mathematical theorems. Such a program doesn’t (in one sense) have any goals in any other environment, so it would seemingly fail the test. Yet, intuitively, if we took a general AI and simply crippled it by denying it input and forcing it to output only theorems, it would still be intelligent. It could even be plotting (manipulating data structures) in the dark about what it will do if it ever gets out, but, because the box it is trapped in isn’t distinguishable from itself, the test fails.

    Worse, this definition doesn’t even point us to the kind of things we might want to regulate. Imagine a program that really couldn’t even comprehend the external world, i.e., represent itself as part of some external reality and react to its situation. This program isn’t plotting in the dark; there is no dark, and it couldn’t even grasp the idea of being trapped. However, it’s really damn good at two things: proving theorems, and taking huge sets of data (along with an indication of which properties we find significant) and coming up with accurate predictions of how future data depends on certain other data points. In other words, it inducts.
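
    As a purely illustrative sketch (the function name and the linear-model choice are my own assumptions, not anything specified above), that second capability is just prediction from data, with no self-model or agency anywhere in sight:

    ```python
    # Hypothetical sketch of the "inductor" described above: given past data and a
    # column we care about, fit how that column depends on the others and predict
    # it for new data. It "understands" nothing; it only curve-fits.

    import numpy as np

    def induct(past_data, target_column, new_data):
        """Fit target_column from the remaining columns, then predict it for new_data."""
        X = np.delete(past_data, target_column, axis=1)
        y = past_data[:, target_column]
        X1 = np.hstack([X, np.ones((X.shape[0], 1))])        # add intercept term
        coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)       # ordinary least squares
        Xn = np.hstack([new_data, np.ones((new_data.shape[0], 1))])
        return Xn @ coeffs

    # Usage: column 2 here happens to be a linear function of columns 0 and 1,
    # so the predictor recovers it while grasping nothing about "itself" or the world.
    rng = np.random.default_rng(0)
    past = rng.normal(size=(100, 2))
    past = np.hstack([past, 3 * past[:, [0]] - past[:, [1]]])
    print(induct(past, target_column=2, new_data=np.array([[1.0, 2.0]])))  # ~[1.0]
    ```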

    Given that this machine has such a narrow focus, it seems we would not call such a system intelligent. However, anyone can basically point the system at some task they want to accomplish (kill the enemy, overthrow the government) and say “sic”, since the algorithm does all the work agents find hard; it merely lacks an understanding of itself as an agent.

    Even if you ignore all this, no observed-capability approach can ever truly work for the purposes of regulation. After all, if all an algorithm needs to do to escape classification as an AI (a classification that carries some substantial restrictions) is appear to be dumb, then the AIs we should really be worried about will be able to duck out of the category.

    Not to mention that we can’t really define what either goals or effectiveness in achieving them means in a way that would apply to an AI. There are things I want to achieve yet find myself psychologically unable to… what would that even mean in terms of code?

    #28707
    Peter
    Participant

    Or, more simply: the paper takes it for granted that we can identify (and modify) the goals (or secondary goals) of an AI simply by giving “reward feedback” in different situations. This supposedly lets them both apply the program to many problems and infallibly determine its goals.

    Neither assumption makes sense: maybe what we think is the reward button isn’t actually a reward, and the program might be hard-coded to only do a few things.
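
    To make the worry concrete, here is a minimal, purely illustrative Python sketch (all class and function names are hypothetical): a probe that “presses the reward button” and watches behaviour cannot tell a reward-seeking program apart from one whose reward input is wired to nothing, so the feedback reveals nothing about the latter’s goals.

    ```python
    # Hypothetical sketch: a "reward feedback" probe that cannot distinguish a
    # reward-seeking agent from a hard-coded one. All names here are illustrative.

    import random

    class HardCodedProver:
        """Ignores reward entirely; it only ever emits canned 'theorems'."""
        def act(self, observation):
            return "output theorem #%d" % random.randint(1, 100)
        def give_reward(self, reward):
            pass  # the "reward button" is wired to nothing

    class RewardSeeker:
        """Tries actions and repeats whichever one has accumulated the most reward."""
        def __init__(self, actions):
            self.totals = {a: 0.0 for a in actions}
            self.last = None
        def act(self, observation):
            self.last = max(self.totals, key=self.totals.get)
            return self.last
        def give_reward(self, reward):
            if self.last is not None:
                self.totals[self.last] += reward

    def probe(agent, steps=10):
        """Naive 'goal identification': reward the agent and watch what it does."""
        history = []
        for _ in range(steps):
            action = agent.act(observation=None)
            agent.give_reward(1.0)   # press the supposed reward button
            history.append(action)
        return history

    # Both agents produce a behaviour log, but only for RewardSeeker does that log
    # say anything about goals; HardCodedProver never noticed the reward at all.
    print(probe(HardCodedProver()))
    print(probe(RewardSeeker(["prove theorems", "do nothing"])))
    ```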
