
How to get started with artificial intelligence if you have a poor foundation in programming and mathematics?

Latest update time: 2017-12-14

1. Current Status of AI Development

1.1 Concept

According to Wikipedia, artificial intelligence is intelligence exhibited by machines, in contrast to the natural intelligence of humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.

1.2 Major events

  • In March 2016, AlphaGo played a man-machine Go match against Lee Sedol, a professional 9-dan player then ranked fourth in the world, and won with a total score of 4:1.

  • In October 2016, the White House released two major reports, "Preparing for the Future of Artificial Intelligence" and "The National Artificial Intelligence Research and Development Strategic Plan", which laid out the United States' plans for AI development and the challenges and opportunities AI brings to government work.

    VentureBeat summarized these two reports and came up with seven easy-to-understand key points:

    1. Artificial intelligence should be used to benefit humanity;

    2. Governments should embrace artificial intelligence;

    3. Autonomous cars and drones need to be regulated;

    4. All children should keep up with the development of technology;

    5. Use AI to supplement, not replace, human workers;

    6. Eliminate bias in data or do not use biased data;

    7. Consider security and global impact.

  • On Double 11 (Singles' Day) 2016, Alibaba's AI design system Luban was put to work on the shopping festival for the first time, producing 170 million product display ads and doubling the click-through rate. If the same work had been done by human designers at 20 minutes per image, it would have taken 100 designers 300 years.

    In 2017, Luban's design ability improved markedly: it had learned from the creative work of millions of designers and could evolve hundreds of millions of design variants. It can now produce 40 million posters a day, no two of which are exactly the same.

  • In May 2017, AlphaGo Master defeated world champion Ke Jie.

  • On October 18, 2017, the DeepMind team announced the most powerful version of AlphaGo, code-named AlphaGo Zero.

  • On October 25, 2017, at the Future Investment Initiative conference held in Saudi Arabia, the country granted citizenship to Sophia, a "female" robot produced by Hanson Robotics of the United States.

    As the world's first robot to be granted citizenship, Sophia said that day that "she" hopes to use artificial intelligence to "help humans live a better life", and told Musk, a proponent of the "AI threat theory": "I will not offend others unless they offend me!"

    After the event, Musk tweeted: "What could be worse than feeding the movie The Godfather into an AI system?" The Godfather is a classic Hollywood film whose plot is full of betrayal and murder.

    The ethical questions raised by granting Sophia citizenship are also something people will have to consider.

    There has been far too much big news in the field of artificial intelligence in recent years to list it all here.

2. What is the relationship between artificial intelligence, deep learning, machine learning, and reinforcement learning?

As shown in the figure, artificial intelligence is a broad field that includes expert systems, knowledge representation, machine learning, and more; among these, machine learning is currently the hottest and best-developed branch. Machine learning in turn includes supervised learning, unsupervised learning, deep learning, reinforcement learning, and so on.

Supervised learning, often referred to as classification, uses existing training samples (known inputs together with their corresponding outputs) to train an optimal model (the model belongs to some set of functions, and "optimal" means best under a given evaluation criterion).

This model is then used to map inputs to outputs; a simple judgment on the outputs achieves the classification, and the model thereby gains the ability to classify unknown data.
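
For intuition, here is a minimal supervised-learning sketch using scikit-learn (which the examples later in this article also rely on); the toy data and labels are invented purely for illustration:

# A known set of inputs X with known labels y trains a model,
# which is then used to label data it has never seen before.
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [8, 8], [9, 8]]     # training samples (features)
y = ['small', 'small', 'big', 'big']     # their known outputs (labels)
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)                          # learn the mapping from inputs to outputs
print(model.predict([[2, 1], [8, 9]]))   # classify unseen data -> ['small' 'big']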

For example, in kindergarten we often did an activity called reading pictures and learning words. As shown in the picture above, the teacher would show us many pictures with words underneath. After a while, we would form abstract concepts in our minds, such as two horns, a short tail, and a fat body (the features)...

This kind of animal is a cow; the round, yellow, shining thing hanging in the sky is the sun; and people look like this. When we see similar things again, we can recognize them even if they are not exactly the same as what we saw before, as long as they match the concepts formed in our minds, as shown in the figure below.

Unsupervised learning is another widely studied learning method. It differs from supervised learning in that there are no training samples in advance; the data must be modeled directly.

For example, as shown in the figure, suppose that without any hints (no training set) you had to divide the following six figures into two categories. How would you do it? Naturally, the first row would form one category and the second row another, because the shapes within each row are more similar to one another.

Unsupervised learning is about finding features in data without knowing the classification of the data set.
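
As a minimal sketch of this idea with scikit-learn (the points are invented for illustration), k-means receives no labels at all and groups the data purely by distance:

# No labels are given; KMeans groups the points by how close they are to each other.
from sklearn.cluster import KMeans

X = [[1, 1], [1.5, 2], [1, 0.5],    # one tight group of points
     [8, 8], [8.5, 9], [9, 8]]      # another tight group of points
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)               # the two groups receive two different cluster labels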

Deep learning is a newer field built on machine learning. It originates from neural network algorithms inspired by the structure of the human brain, and it developed as model structures grew deeper and as big data and computing power improved, giving rise to a series of new algorithms.

The concept of deep learning was proposed and promoted by the famous scientist Geoffrey Hinton and others in papers published in Science and other journals in 2006 and 2007.

Deep learning, as an extension of machine learning, is applied in image processing and computer vision, natural language processing, and speech recognition.

Since 2006, research on and applications of deep learning in academia and industry have made breakthrough progress in these fields. For example, in the classic object recognition competition based on the ImageNet database, deep learning models defeated all traditional algorithms and achieved unprecedented accuracy.

Reinforcement learning is another important branch of machine learning: it learns which actions to take through interaction. Each action affects the environment, and the learning agent adjusts its behavior based on the feedback the environment provides.
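
As a minimal sketch of that loop (a toy corridor environment invented for illustration, using tabular Q-learning), the agent acts, observes a reward, and updates its estimate of how good each action is:

# A 5-cell corridor: states 0..4, actions 0 = left, 1 = right.
# Reaching the last cell yields reward 1; the agent learns to walk right.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # action-value table
alpha, gamma, epsilon = 0.5, 0.9, 0.1              # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # epsilon-greedy choice: mostly exploit, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a')
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# After training, "right" should have the higher value in states 0 through 3
print([[round(q, 2) for q in row] for row in Q])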

3. How important is the mathematical foundation?

For basic mathematical knowledge, you need high-school mathematics plus advanced mathematics (calculus), linear algebra, statistics, and probability theory. Even if you have not mastered them well, you should at least know the concepts and know where to look things up when you need them.

If your foundation is weak, you can first read Wu Jun's The Beauty of Mathematics, which is easy to understand. You can also learn by doing; practice is the only criterion for testing truth. After all, most people focus on engineering practice. If you want to be a scientist doing theoretical research, this article is not for you.

4. Entry-level machine learning algorithms

4.1 Decision Tree

A decision tree is a tree structure similar to a flowchart: each internal node represents a test on an attribute, each branch represents an outcome of that test, and each leaf node represents a class or class distribution. The top node of the tree is the root node.

For example, consider a data set that records each customer's age, income, whether they are a student, credit rating, and whether they bought a computer. Age is young, middle-aged, or senior; income is high, medium, or low; credit rating is fair or excellent. The data is saved in AllElectronics.csv.

Now, given a new person (a new data record), we need to judge whether this person will buy a computer.
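
The classifier below chooses its splits with the entropy criterion. As background, here is a minimal sketch of how entropy and information gain are computed; the class counts are hypothetical and are not taken from AllElectronics.csv:

# entropy(S) = -sum_i p_i * log2(p_i); information gain = entropy before a split
# minus the weighted entropy after the split.
import math

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

before = entropy([9, 5])   # e.g. 9 buyers and 5 non-buyers overall
# hypothetical split on "student": yes -> 6 buyers / 1 non-buyer, no -> 3 / 4
after = (7 / 14) * entropy([6, 1]) + (7 / 14) * entropy([3, 4])
print(round(before - after, 3))   # information gain of splitting on "student"

The attribute with the largest information gain is chosen as the test at the current node; scikit-learn's criterion='entropy' below uses the same entropy measure to score candidate splits.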

# Decision tree on AllElectronics.csv using scikit-learn (Python 3)
import csv
from sklearn.feature_extraction import DictVectorizer
from sklearn import preprocessing, tree

allElectronicsData = open(r'D:\deeplearning\AllElectronics.csv', 'r')
reader = csv.reader(allElectronicsData)
headers = next(reader)            # the first row holds the column names
print(headers)

featureList = []                  # one dict of attributes per row
labelList = []                    # the last column: whether a computer was bought
for row in reader:
    labelList.append(row[len(row) - 1])
    rowDict = {}
    for i in range(1, len(row) - 1):   # skip the ID column and the label column
        rowDict[headers[i]] = row[i]
    featureList.append(rowDict)
print(featureList)
print(labelList)

# One-hot encode the categorical features and binarize the labels
vec = DictVectorizer()
dummyX = vec.fit_transform(featureList).toarray()
print("dummyX: " + str(dummyX))
print(vec.get_feature_names())    # newer scikit-learn releases use get_feature_names_out()
lb = preprocessing.LabelBinarizer()
dummyY = lb.fit_transform(labelList)
print("dummyY: " + str(dummyY))

# Train a decision tree that chooses splits by information gain (entropy)
clf = tree.DecisionTreeClassifier(criterion='entropy')
clf = clf.fit(dummyX, dummyY)
print("clf: " + str(clf))

# Export the tree as a .dot file in the current working directory
with open("allElectronicInformationGainDri.dot", 'w') as f:
    f = tree.export_graphviz(clf, feature_names=vec.get_feature_names(), out_file=f)

# Build a new sample by tweaking the first row's encoding, then predict its class
oneRowX = dummyX[0, :]
print("oneRowX: " + str(oneRowX))
newRowX = oneRowX.copy()
newRowX[0] = 1
newRowX[2] = 0
print("newRowX: " + str(newRowX))
predictedY = clf.predict([newRowX])    # predict() expects a two-dimensional array
print("predictedY: " + str(predictedY))

4.2 Nearest Neighbor Sampling

Nearest-neighbor classification divides the existing data into several categories, computes the distance between newly input data and the known data, and assigns the new data to the category it is closest to. For example, in the movie classification shown in the figure below, the movie in the last row, whose genre is unknown, is closer to the romance movies based on its numbers of fight scenes and kiss scenes, and should therefore be classified as a romance movie.
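
The same idea in a minimal scikit-learn sketch (the fight/kiss counts below are invented for illustration):

from sklearn.neighbors import KNeighborsClassifier

X = [[3, 104], [2, 100], [101, 10], [99, 5]]   # [number of fights, number of kisses]
y = ['romance', 'romance', 'action', 'action']
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
print(knn.predict([[18, 90]]))   # few fights, many kisses -> 'romance'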

Example: irisdata.txt is an iris data set downloaded from the Internet. Based on this data set, new data is classified.

# coding:utf-8
# Implement the KNN algorithm by hand, without calling a library
import csv       # read the data file
import random    # random split of the data
import math      # square root
import operator  # itemgetter for sorting

# Load the data set. filename: data set file; split: threshold used to randomly
# divide the rows into trainingSet and testSet.
def loadDataSet(filename, split, trainingSet=[], testSet=[]):
    with open(filename, 'r') as csvfile:
        lines = csv.reader(csvfile)          # all lines of the file
        dataset = list(lines)                # convert the data to list format
        for x in range(len(dataset) - 1):
            for y in range(4):
                dataset[x][y] = float(dataset[x][y])
            if random.random() < split:      # if the random value is below split
                trainingSet.append(dataset[x])
            else:
                testSet.append(dataset[x])

# Euclidean distance: square root of the sum of squared coordinate differences
def euclideanDistance(instance1, instance2, length):
    distance = 0
    for x in range(length):
        distance += pow((instance1[x] - instance2[x]), 2)
    return math.sqrt(distance)

# Return the k neighbours in trainingSet that are closest to testInstance
def getNeighbours(trainingSet, testInstance, k):
    distances = []
    length = len(testInstance) - 1
    for x in range(len(trainingSet)):
        # distance between each training record and the test instance
        dist = euclideanDistance(testInstance, trainingSet[x], length)
        distances.append((trainingSet[x], dist))
    distances.sort(key=operator.itemgetter(1))   # sort from small to large
    neighbors = []
    for x in range(k):                           # keep the k nearest
        neighbors.append(distances[x][0])
    return neighbors

# Majority vote: count the classes of the nearest neighbours and return the
# class with the most votes as the prediction for the test instance.
def getResponse(neighbors):
    classVotes = {}
    for x in range(len(neighbors)):
        response = neighbors[x][-1]
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedVotes[0][0]

# Prediction accuracy. testSet: test data; predictions: classes predicted by the code
def getAccuracy(testSet, predictions):
    correct = 0
    for x in range(len(testSet)):
        if testSet[x][-1] == predictions[x]:     # -1 is the label column
            correct += 1
    return (correct / float(len(testSet))) * 100.0

def main():
    trainingSet = []
    testSet = []
    split = 0.67    # roughly two thirds for training, one third for testing
    loadDataSet(r'C:\Users\ning\workspace\KNNdata\irisdata.txt', split, trainingSet, testSet)
    print('Train Set: ' + repr(len(trainingSet)))
    print('Test Set: ' + repr(len(testSet)))
    predictions = []
    k = 3
    for x in range(len(testSet)):
        neighbors = getNeighbours(trainingSet, testSet[x], k)
        result = getResponse(neighbors)
        predictions.append(result)
        print('> predicted=' + repr(result) + ', actual=' + repr(testSet[x][-1]))
    accuracy = getAccuracy(testSet, predictions)
    print('Accuracy: ' + repr(accuracy) + '%')

main()

4.3 Support Vector Machine

A support vector machine (SVM) grew out of the optimal separating hyperplane for the linearly separable case. The optimal separating hyperplane must not only separate the two classes correctly (zero training error) but also maximize the margin between them.

In other words, the SVM looks for a hyperplane that satisfies the classification requirement while keeping the points in the training set as far away from it as possible, i.e., it maximizes the blank area (margin) on both sides of the separating surface.

The training samples of the two classes that lie closest to the separating surface, on the hyperplanes H1 and H2 parallel to the optimal separating surface, are called support vectors.
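
In symbols, this is the standard linearly separable SVM formulation (textbook background, not spelled out in the original text): for a separating hyperplane w·x + b = 0, the hyperplanes H1 and H2 satisfy w·x + b = +1 and w·x + b = -1, the margin between them is 2/||w||, and maximizing the margin amounts to

% standard hard-margin SVM objective
\min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^{2}
\quad \text{subject to} \quad
y_i \,(w \cdot x_i + b) \ge 1, \qquad i = 1, \dots, n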

Example: implementing the SVM algorithm with the sklearn library, commonly known as "calling a library". Calling a library is actually a very simple process; in the early stages you do not even need to understand the underlying principles.

# coding:utf-8
from sklearn import svm

X = [[2, 0], [1, 1], [2, 3]]
y = [0, 0, 1]
clf = svm.SVC(kernel='linear')
clf.fit(X, y)    # fit() computes all the parameters of the SVM and stores them in clf
print(clf)

# get the support vectors
print(clf.support_vectors_)
# get the indices of the support vectors
print(clf.support_)
# get the number of support vectors for each class
print(clf.n_support_)
# predict new data; the argument is a two-dimensional array
print(clf.predict([[2, 0], [10, 10]]))

5. Book list recommendation

  • The Beauty of Mathematics by Wu Jun

  • Machine Learning by Zhou Zhihua

  • Talking about Artificial Intelligence by the Jizhi Club

  • Machine Learning in Action by Peter Harrington

  • "TensorFlow Technical Analysis and Practice" by Li Jiaxuan

  • Statistical Learning Methods by Li Hang

6. Misconceptions in learning artificial intelligence—Is artificial intelligence another bubble?

Artificial intelligence has been heavily exaggerated by some technology giants. That is understandable when the goal is to attract capital, but ordinary people must be able to judge for themselves and analyze objectively whether this industry suits them.

Looking back at the history of the Internet, the current artificial intelligence boom is not the first of its kind. Take the O2O model that became popular in 2014: at the time, if you knew nothing about O2O, you would not dare claim to be part of the Internet circle.

Since then, batch after batch of entrepreneurs have fallen, and of course giants like Amazon and Alibaba remain. Every industry has its pyramid.

When I was a sophomore, 3D printing and VR were at the forefront, and 3D printing and VR startups emerged one after another. By my senior year, they were going bankrupt one after another. I also worked on 3D printing projects, but what I actually built were only minor improvements; the core framework had already been designed by the big players.

If we blindly follow the trend of technology, we will always lag behind.

Recently I saw that CCTV is airing an artificial intelligence variety show hosted by Sa Beining. This suggests that artificial intelligence has already become a red ocean and is no longer fundamentally different from today's mobile Internet technology.

Since Google open-sourced the TensorFlow framework (and there are many other excellent frameworks), writing machine learning code is mostly a matter of tuning parameters, and some people do not even need to understand the principles. Of course there are real experts, but as I said, every industry has its pyramid; people just take different paths to the top.

In my opinion, there is no essential difference between developing artificial intelligence with the TensorFlow framework and developing apps with the Android API. The truly great company is Google; latecomers are just followers.

Off topic: have you heard the saying that the 21st century is the century of biology? When that idea emerged, many college entrance examination candidates chose biology-related majors. A survey of the employment destinations of biology graduates from a famous domestic university concluded, among other things, that the best way out for biology students is to leave the field.

Of course, biotechnology is closely related to the lives of every one of us, but its development cycle is so long that no individual can afford to wait it out. How to reconcile personal identity with social identity, and self-worth with social value, is also a question we need to think about.

Is artificial intelligence a bubble? How long will this concept last?

The content of Part 6 is purely personal opinion and is for reference only.



 