How to visualize decision trees

Terence Parr and Prince Grover

(Terence teaches in the University of San Francisco's MS in Data Science program and Prince is a student. You might know Terence as the creator of the ANTLR parser generator.)

Please send comments, suggestions or corrections to Terence.

Decision trees are the fundamental building blocks of gradient boosting machines and Random Forests™, probably the two most popular machine learning models for structured data. Visualizing decision trees is a huge help when learning how these models work and when interpreting models. Unfortunately, current visualization packages are rudimentary and not immediately useful to beginners. For example, we could not find a library that shows how decision nodes split up the feature space. It is also rare for libraries to support visualizing a specific feature vector as it weaves down through a tree's decision nodes; we could find only one image showing this.

So, we've created a general package for scikit-learn decision tree visualization and model interpretation, which we'll be using heavily in an upcoming machine learning book (written with Jeremy Howard). Here's an example visualization for a small decision tree (click to enlarge):

This article illustrates the results of this work, describes in detail the specific choices we made for visualization, and outlines the tools and techniques used in the implementation. The visualization software is part of a nascent Python machine learning library called animl. We assume that you're familiar with the basic mechanics of decision trees if you're interested in visualizing them, but let's start with a brief summary so that we're all using the same terminology. (If you're not familiar with decision trees, check out fast.ai's Introduction to Machine Learning for Coders MOOC.)

Decision tree review

A decision tree is a machine learning model built on binary trees (trees with at most one left and one right child). A decision tree learns the relationship between observations in a training set, represented as feature vectors x and target values y, by examining and condensing the training data into a binary tree of internal decision nodes and leaf nodes. (Notation: vectors are in bold and scalars are in italics.)

Each leaf in the decision tree is responsible for making a specific prediction. For regression trees, the prediction is a value, such as price. For classifier trees, the prediction is a target category (represented as an integer in scikit), such as cancer or not-cancer. A decision tree carves up the feature space into groups of observations that share similar target values, and each leaf represents one of these groups. For regression, similarity in a leaf means a low variance among target values; for classification, it means that most or all targets are of a single class.

Any path from the root of the decision tree to a specific leaf predictor node passes through a series of (internal) decision nodes. Each decision node compares the value of a single feature in x, xi, with a specific split point value learned during training. For example, in a model predicting apartment rent prices, decision nodes would test features such as the number of bedrooms and the number of bathrooms. (See Section 3, Visualizing tree interpretation of a single observation.) Even in a classifier with discrete target values, decision nodes still compare numeric feature values because the scikit decision tree implementation assumes that all features are numeric. Categorical variables must be one-hot encoded, binned, label encoded, etc.
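To make that structure concrete, here is a toy sketch (our own illustration, not scikit's or animl's code) of a binary decision tree node and the root-to-leaf descent just described:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[int] = None       # index i of the feature x_i tested at this node
    split: Optional[float] = None       # split point learned during training
    left: Optional["Node"] = None       # subtree for observations with x_i < split
    right: Optional["Node"] = None      # subtree for observations with x_i >= split
    prediction: Optional[float] = None  # set only on leaf nodes

def predict(node, x):
    # Run feature vector x from the root down to a leaf, as described above.
    while node.prediction is None:
        node = node.left if x[node.feature] < node.split else node.right
    return node.prediction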

To train a decision node, the model examines a subset of the training observations (or the full training set at the root). The node's feature and the split point within that feature's space are chosen during training to split the observations into left and right buckets (subsets) so as to maximize the similarity defined above. (This selection process is generally performed through exhaustive comparison of features and feature values.) The left bucket holds observations whose xi feature values are all less than the split point and the right bucket holds observations whose xi is greater than the split point. Tree construction proceeds recursively by creating decision nodes for the left and right buckets. Construction stops when a stopping criterion is reached, such as having fewer than five observations in a node.
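Here is a simplified sketch of that exhaustive split search for a regression node, where "similarity" is taken to mean low target variance; real implementations such as scikit's are far more optimized:

import numpy as np

def best_split(X, y):
    # Exhaustively try each feature and candidate split value, keeping the split
    # that minimizes the weighted variance of the left/right target buckets.
    best_feature, best_split_value, best_score = None, None, np.inf
    for i in range(X.shape[1]):
        for split in np.unique(X[:, i]):
            left, right = y[X[:, i] < split], y[X[:, i] >= split]
            if len(left) == 0 or len(right) == 0:
                continue
            score = len(left) * left.var() + len(right) * right.var()
            if score < best_score:
                best_feature, best_split_value, best_score = i, split, score
    return best_feature, best_split_value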

Key elements of decision tree visualization

Decision tree visualizations should highlight the following important elements, which we demonstrate below.

  • Decision node feature versus target value distributions (which we call feature-target space in this article). We want to know how separable the target values are based upon the feature and a split point.
  • Decision node feature name and feature split value. We need to know which feature each decision node is testing and where in that space the node splits the observations.
  • Leaf node purity, which affects our prediction confidence. Leaves with low variance among the target values (regression) or an overwhelming majority target class (classification) are much more reliable predictors.
  • Leaf node prediction value. What does this leaf actually predict from its collection of target values?
  • Numbers of samples in decision nodes. Sometimes it's useful to know where all of the samples are being routed through the decision nodes.
  • Numbers of samples in leaf nodes. Our goal is a decision tree with fewer, larger, and purer leaves. Nodes with too few samples are possible indications of overfitting.
  • How a specific feature vector is run down the tree to a leaf. This helps explain why a particular feature vector gets the prediction it does. For example, in a regression tree predicting apartment rent prices, we might find a feature vector routed into a high-price leaf because of a decision node that checks for more than three bedrooms.

Gallery of decision tree visualizations

Before digging into the previous state of the art, we'd like to give a little spoiler to show what's possible. This section highlights some sample visualizations we built from scikit regression and classification decision trees on a few data sets. You can also check out the full gallery and the code to generate all of the samples.

A comparison to previous state-of-the-art visualizations

If you search for "visualizing decision trees" you will quickly find a Python solution provided by the awesome scikit folks: sklearn.tree.export_graphviz. With more work, you can find visualizations for R and even SAS and IBM. In this section, we collect the various decision tree visualizations we could find and compare them to the visualizations made by our own animl library. We give a more detailed discussion of our visualizations in the next section.
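For reference, here is a minimal sketch of that default scikit route; export_graphviz() is real scikit-learn API, while the depth and file names are just illustrative:

from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
clf = tree.DecisionTreeClassifier(max_depth=3)   # depth chosen only for illustration
clf.fit(iris.data, iris.target)

tree.export_graphviz(clf, out_file="iris.dot",   # render with: dot -Tpng iris.dot -o iris.png
                     feature_names=iris.feature_names,
                     class_names=iris.target_names,
                     filled=True)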

Let's start with the default scikit visualization of a decision tree on the well-known Iris data set (click on images to enlarge).

Default scikit Iris visualization | Our animl Iris visualization

The scikit tree does a good job of representing the tree structure, but we have a few quibbles. The colors aren't the best and it's not immediately obvious why some of the nodes are colored and some aren't. If the colors represent the predicted class for this classifier, you'd think only the leaves would be colored, because only leaves have predictions. (It turns out the uncolored nodes have no majority prediction.) Including the Gini coefficient (an impurity score) costs space and doesn't help with interpretation. The count of samples of each target class in a node is somewhat useful, but a histogram would be even better. A legend for the target class colors would be nice. Finally, using True and False as edge labels isn't as clear as, say, labels such as < and ≥. The most obvious difference is that our decision nodes show feature distributions as overlapping histograms, one histogram per target class. Also, our leaf size is proportional to the number of samples in that leaf.

Scikit uses the same visualization approach for decision tree regressors. For example, here is scikit's visualization on the Boston data set, with the animl version for comparison (click to enlarge images):

Default scikit Boston visualization | Our animl Boston visualization

In the scikit tree, it's not immediately clear what the use of color implies, but after studying the image, darker colors indicate higher predicted target values. As before, our decision nodes show the feature space distribution, this time using a scatterplot of feature versus target value. Leaves use strip plots to show the target value distribution; leaves with more dots naturally have more samples.

R programmers also have access to a package for visualizing decision trees, which gives results similar to scikit's but with nicer edge labels:

SAS and IBM also provide (non-Python-based) decision tree visualizations. Starting with SAS, we see that their decision nodes include a bar chart related to the node's sample target values, plus other details:

SAS visualization | SAS visualization with numeric features (the best image quality we could find)

Indicating the size of the left and right buckets via the edge width is a nice touch. But those bar charts are hard to interpret because they have no horizontal axis. Decision nodes testing categorical variables (left image) have exactly one bar per category, so they must represent simple category counts rather than feature distributions. For numeric features (right image), SAS decision nodes show a histogram of either target or feature space (we can't tell which from the image). The SAS node bar charts/histograms appear to illustrate only target values, which tells us nothing about how the feature space was split.

The SAS tree on the right does appear to highlight a path through the decision tree for a specific unknown feature vector, but we could not find other examples from other tools and libraries. The ability to visualize a specific vector run down the tree does not seem to be generally available.

Turning to IBM software, here is a nice visualization, which also shows decision node category counts as bar charts, from IBM's Watson Analytics (on the TITANIC data set):

IBM's earlier SPSS product also had decision tree visualizations:

SPSS visualization | SPSS visualization

These SPSS decision nodes seem to give the same SAS-like bar chart of sample target class counts.

All of the visualizations we encountered from the major players were useful, but we were most inspired by the stunning visualizations in A Visual Introduction to Machine Learning, which shows an (animated) decision tree like this:

This visualization has three unique characteristics over previous work, aside from the animation:

  • decision nodes show how the feature space is split
  • split points for decision nodes are shown visually (as a wedge) in the distribution
  • leaf size is proportional to the number of samples in that leaf

While that visualization is a hardcoded animation for educational purposes, it points in the right direction.

Our decision tree visualizations

Other than the educational animation in A Visual Introduction to Machine Learning, we could not find a decision tree visualization package that illustrates how the feature space is split at decision nodes (feature-target space). This is the critical operation performed during decision tree training and is what newcomers should focus on, so we'll start by examining decision node visualizations for both classification and regression trees.

Visualizing feature-target space

Training of a single decision node chooses a feature xi and a split value within xi's range of values (feature space) in order to group samples with similar target values into two buckets. To be clear, training involves examining the relationship between features and target values. Unless decision nodes show feature-target space in some way, the viewer cannot see how and why training arrived at the split value. To highlight how decision nodes carve up the feature space, we trained a regressor and a classifier with a single feature (AGE) (code to generate images). Here is a regressor decision tree trained using a single feature from the Boston data, AGE, with node ID labeling turned on for discussion purposes here:
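As a rough sketch of how a single-feature tree like this might be produced (assuming the dtreeviz() call shown in the Sample code section below; the depth and the node-ID labeling option used for the actual figure are not shown here):

from sklearn.datasets import load_boston
from sklearn import tree

boston = load_boston()
age_index = list(boston.feature_names).index("AGE")
X_age = boston.data[:, [age_index]]              # keep AGE as the only feature

regr = tree.DecisionTreeRegressor(max_depth=3)   # depth here is illustrative
regr.fit(X_age, boston.target)

viz = dtreeviz(regr, X_age, boston.target, target_name='price',
               feature_names=['AGE'])            # dtreeviz() comes from the animl library
viz.view()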

Horizontal dashed lines indicate the target mean for the left and right buckets in decision nodes; a vertical dashed line indicates the split point in feature space. The black wedge highlights the split point and identifies the exact split value. Leaf nodes indicate the target prediction (mean) with a dashed line.

As you can see, each AGE feature axis uses the same range, rather than zooming in, to make comparing decision nodes easier. As we descend through decision nodes, the sample AGE values are boxed into narrower and narrower regions. For example, the AGE feature space in node 0 is split into the regions of AGE feature space shown in nodes 1 and 8. Node 1's space is then split into the pieces shown in nodes 2 and 5. The prediction leaves are not very pure because training a model on a single variable leads to a poor model, but this restricted example demonstrates how decision trees carve up feature space.

While the decision tree implementation is virtually identical for classifier and regressor decision trees, the way we interpret them is very different, so our visualizations are distinct for the two cases. For a regressor, feature-target space is best shown with a scatterplot of feature versus target. For classifiers, however, the target is a category rather than a number, so we chose to illustrate feature-target space using histograms as an indicator of feature space distributions. Here is a classifier tree trained on the USER KNOWLEDGE data, again with a single feature (PEG), and with nodes labeled for discussion purposes:

Ignoring color, the histogram shows the PEG feature space distribution. Adding color gives us an indication of the relationship between feature space and target class. For example, in node 0 we can see that samples with the very low target class are clustered at the low end of the PEG feature space and samples with the high target class are clustered at the high end. As with the regressor, a left child's feature space is everything to the left of its parent's split point in the same feature space; likewise for the right child. For example, combining the histograms of nodes 9 and 12 yields the histogram of node 8. We force the horizontal axis range to be the same for all PEG decision nodes so that decision nodes lower in the tree are clearly boxed into narrower regions that are more and more pure.

We use a stacked histogram so that overlap in feature space between samples with different target classes is clear. Note that the Y axis of a stacked histogram is the total sample count across all classes; multiple class counts are stacked on top of one another.

When there are more than four or five classes, stacked histograms are hard to read, so we recommend setting the histogram type parameter to bar rather than barstacked in that case. With high-cardinality target categories, the overlapping distributions are harder to visualize and things break down, so we set a limit of 10 target classes. Here is an example of a shallow tree on the 10-class Digits data set using non-stacked histograms:
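The bar versus barstacked distinction mirrors matplotlib's histogram types; here is a small standalone matplotlib sketch of the difference (not animl's drawing code):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
per_class = [rng.normal(loc=m, scale=1.0, size=200) for m in (0.0, 1.5, 3.0)]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(per_class, bins=20, histtype='barstacked')  # class counts stacked on top of each other
ax1.set_title('barstacked')
ax2.hist(per_class, bins=20, histtype='bar')         # classes drawn side by side
ax2.set_title('bar')
plt.show()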

Sweating the details

So far we've skipped over many of the visual cues and details that we obsessed over during construction of the library, so we hit the key elements here.

Our classifier tree visualizations use node size to give visual cues about the number of samples associated with each node. Histograms get proportionally shorter as the number of samples in a node decreases, and leaf node diameters get smaller. Feature space (the horizontal axis) always has the same width and the same range for a given feature, which makes it much easier to compare the feature-target spaces of different nodes. The bars of all histograms are the same width in pixels. We use only start/stop range labels for both horizontal and vertical axes to reduce the overall size.

We use pie charts for classifier leaves, despite their bad reputation. For the purpose of indicating purity, the viewer needs only an indication of whether there is a single strong majority category. The viewer does not need to see the exact relationship between pie chart elements, which is the key area where pie charts fail. The color of the majority pie slice gives the leaf prediction.

Turning now to regressor trees, we make sure that the (vertical) target axis of all decision nodes has the same height and the same range to make comparing nodes easier. Regressor feature space (the horizontal axis) always has the same width and the same range for a given feature. We set a low alpha for all scatterplot dots so that increased dot density corresponds to a darker color.

Regressor leaves also show the same vertical range for the target space. We use a strip plot rather than, say, a box plot, because the strip plot shows the distribution explicitly while implicitly showing the number of samples by the number of dots. (We also write out the number of samples in text for leaves.) The leaf prediction is the distribution's center of mass (mean), which we highlight with a dashed line.
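A small matplotlib sketch of the strip-plot idea for a single leaf (jittered dots plus a dashed line at the mean); this is illustrative only, not animl's leaf-drawing code:

import numpy as np
import matplotlib.pyplot as plt

y_leaf = np.random.normal(loc=22.0, scale=3.0, size=40)   # pretend target values in one leaf
jitter = np.random.uniform(-0.1, 0.1, size=y_leaf.size)   # spread the dots horizontally

plt.scatter(jitter, y_leaf, alpha=0.5, s=10)
plt.axhline(y_leaf.mean(), linestyle='--', linewidth=1)   # leaf prediction = mean of targets
plt.xticks([])
plt.show()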

There are also a number of miscellaneous details that we think improve the quality of the diagrams:

  • Classifiers include a legend
  • All colors were carefully chosen from colorblind-safe palettes, with a palette tailored to each number of target categories (2 through 10)
  • We use gray rather than black text because it's easier on the eyes
  • We draw lines thinly
  • We outline the bars of bar charts and the wedges of pie charts

Visualizing tree interpretation of a single observation

To understand how model training arrives at a specific tree, all of the action is in the feature-space splits of the decision nodes, which we just discussed. Now, let's take a look at how a specific feature vector yields a specific prediction. The key here is to examine the decisions taken along the path from the root to the leaf predictor node.

The decision-making process within a node is simple: take the left path if feature xi in test vector x is less than the split point, otherwise take the right path. To highlight the decision-making process, we have to highlight the comparison operation. For decision nodes along the path to the leaf predictor node, we show an orange wedge at position xi in the horizontal feature space. This makes the comparison easy to see: if the orange wedge is to the left of the black wedge, go left, otherwise go right. Decision nodes involved in the prediction process are surrounded by dashed lines and the child edges are thicker and orange. Here are two sample trees showing test vectors (click on images to expand):
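Scikit itself can report which nodes a test vector visits via decision_path(); here is a short sketch of extracting that path and the comparison made at each decision node (this is just one way to get the same information our visualization highlights, not how animl draws it):

from sklearn.datasets import load_wine
from sklearn.tree import DecisionTreeClassifier

wine = load_wine()
clf = DecisionTreeClassifier(max_depth=3).fit(wine.data, wine.target)

x = wine.data[5:6]                      # a single test observation, shape (1, n_features)
node_indicator = clf.decision_path(x)   # sparse indicator matrix of visited nodes
path = node_indicator.indices[node_indicator.indptr[0]:node_indicator.indptr[1]]

for node_id in path:
    feature = clf.tree_.feature[node_id]
    threshold = clf.tree_.threshold[node_id]
    if feature >= 0:                    # internal decision node (leaves store -2 here)
        went = "left" if x[0, feature] <= threshold else "right"   # scikit routes left on <=
        print(f"node {node_id}: x[{feature}] = {x[0, feature]:.2f} vs split {threshold:.2f} -> go {went}")
    else:
        print(f"node {node_id}: leaf")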

KNOWLEDGE data with test vector | Diabetes data with test vector

The test vector x, with feature names and values, is shown below the leaf predictor node (or to the right in left-to-right orientation). The test vector highlights the features used in one or more decision nodes. When the number of features reaches a threshold of 20 (10 for left-to-right orientation), test vectors omit unused features to avoid unwieldy test vectors.

Left-to-right orientation

Some users prefer left-to-right orientation rather than top-down, and sometimes the nature of the tree simply flows better left to right. Sample feature vectors can still be run down the tree in left-to-right orientation. Here are some examples (click on images to enlarge):

Wine | Diabetes
Wine showing a prediction | Diabetes showing a prediction

Simplified non-fancy layout

To evaluate the generality of a decision tree, it often helps to get a high-level overview of the tree. This generally means examining things like tree shape and size but, most importantly, it means looking at the leaves. We'd like to know how many samples each leaf has, how pure the target values are and, in general, where most of the sample weight falls. Getting an overview is harder when the visualization is too large, so we provide a "non-fancy" option that generates smaller visualizations while retaining key leaf information. Here is a sample classifier and regressor in non-fancy mode with top-down orientation:

What we tried and rejected

Those interested in these tree visualizations from a design point of view might find it interesting to read about what we tried and rejected. Starting with classifiers, we thought histograms were a bit busy and that perhaps kernel density estimates would give a more accurate picture. We had decision nodes that looked like this:

The problem is that decision nodes with only one or two samples gave wildly misleading distributions:

We also experimented with using bubble charts rather than histograms for classifier decision nodes:

These look really cool but, in the end, histograms are easier to read.

Turning to regressor trees, we considered using box plots to show the distribution of prediction values and also a simple bar chart to show the number of samples:

This dual plot per leaf is less satisfying than the strip plot we use now. The box plot also doesn't show the distribution of target values nearly as well as a strip plot. Before the strip plot, we simply plotted the target values using the sample index value as the horizontal axis:

This is misleading because the horizontal axis is usually the feature space. We scrunched that up into a strip plot.

Sample code

This section gives a sample visualization for the Boston regression data set and the Wine classification data set. You can also check out the full gallery of sample visualizations and the code to generate the samples.

Boston regression tree visualization

Here is a code snippet to load the Boston data and train a regression tree with a maximum depth of three decision nodes:

from sklearn.datasets import load_boston
from sklearn import tree

boston = load_boston()

X_train = boston.data
y_train = boston.target
testX = X_train[5, :]

regr = tree.DecisionTreeRegressor(max_depth=3)
regr = regr.fit(X_train, y_train)

The code to visualize the tree involves passing in the tree model, the training data, the feature and target names, and a test vector (if desired):

viz = dtreeviz(regr, X_train, y_train, target_name='price',
               feature_names=boston.feature_names,
               X=testX)
viz.save("boston.svg")  # suffix determines the format of the generated image
viz.view()              # pop up a window to view the image

Wine classification tree visualization

Here is a code snippet to load the Wine data and train a classifier tree with a maximum depth of three decision nodes:

from sklearn.datasets import load_wine
from sklearn import tree

wine = load_wine()
clf = tree.DecisionTreeClassifier(max_depth=3)
clf.fit(wine.data, wine.target)

Visualizing a classifier is the same as visualizing a regressor, except that the visualization needs the target class names:

viz = dtreeviz(clf, wine.data, wine.target, target_name='wine',
               feature_names=wine.feature_names,
               class_names=list(wine.target_names))
viz.view()

In Jupyter notebooks, the object returned by dtreeviz() has a _repr_svg_() function that Jupyter uses to display the object automatically. See the sample notebook.

Jupyter notebook bug

At the moment (as of September 2018), Jupyter notebooks do not correctly display the SVG generated by this library. The fonts and so on are all messed up:

The good news is that GitHub displays them correctly, as does JupyterLab.

Use Image(viz.topng()) to view (albeit imperfectly) in a Jupyter notebook, or simply call viz.view(), which will pop up a window that shows things properly.
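For example, a minimal workaround cell might look like this (assuming viz is the object returned by dtreeviz(), as in the sample code above):

from IPython.display import Image

# viz.topng() produces a PNG that the notebook can render inline even where
# the SVG display is broken.
Image(viz.topng())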

Our implementation

This project was very frustrating, with lots of programming dead ends, parameter fiddling, working around bugs and limitations in tools and libraries, and creatively mashing up a bunch of existing tools. The only fun part was the (countless) sequence of visual design experiments. We pushed on because it seemed likely that the machine learning community would find these visualizations as useful as we do. This project represents about two months of trudging through stackoverflow, documentation, and hairy graphics programming.

At the highest level, we used matplotlib to generate images for decision and leaf nodes and combined them into a tree using the venerable graphviz. We also made extensive use of HTML labels in the graphviz tree description for layout and font purposes. The single biggest headache was convincing all the components of our solution to produce high-quality vector graphics.

Our first coding experiments led us to create a shadow tree wrapping the decision trees created by scikit, so let's start with that.

Shadow trees for scikit decision trees

The decision trees for scikit-learn classifiers and regressors are built for efficiency, not necessarily for ease of tree walking or extracting node information. We created the animl.trees.ShadowDecTree and animl.trees.ShadowDecTreeNode classes as an easy-to-use (traditional binary tree) wrapper around all of the tree information. Here's how to create a shadow tree from a scikit classifier or regressor tree model:

shadow_tree = ShadowDecTree(tree_model, X_train, y_train, feature_names, class_names)

The shadow tree/node classes have lots of methods that could be useful to other libraries and tools that need to walk scikit decision trees. For example, predict() not only runs a feature vector through the tree but also returns the path of visited nodes. The samples associated with any particular node can be had via node_samples().
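A hypothetical usage sketch based on the method names just described (predict() returning the visited-node path and node_samples() returning per-node samples); the exact signatures and return values may differ from animl's actual API:

shadow_tree = ShadowDecTree(clf, wine.data, wine.target,
                            wine.feature_names, list(wine.target_names))

# Hypothetical unpacking: the leaf reached plus the path of visited shadow nodes.
leaf, path = shadow_tree.predict(wine.data[0])
for node in path:
    print(node)

samples_by_node = shadow_tree.node_samples()   # training samples that reach each node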

Tool mashup

The graphviz dot language for describing tree structure is very useful for getting decent tree layouts if you know all the tricks, such as getting left children to appear to the left of right children by connecting them with hidden graph edges. For example, if you have two leaves, leaf4 and leaf5, that must appear left to right on the same level, here is the graphviz magic:

LSTAT3 -> leaf4 [penwidth=0.3 color="#444443" label=<>]
LSTAT3 -> leaf5 [penwidth=0.3 color="#444443" label=<>]
{
    rank=same;
    leaf4 -> leaf5 [style=invis]
}

We typically use HTML labels on graphviz nodes rather than plain text labels because they give much more control over text display and provide the ability to show tabular data as actual tables. For example, when displaying a test vector run down the tree, the test vector is shown using an HTML table:

To generate images from graphviz files, we use the graphviz Python package, which ends up executing the dot binary via one of its utility routines (run()). Occasionally we needed slightly different parameters on the dot command, so we call run() directly ourselves for flexibility:

cmd = ["dot", "-Tpng", "-o", filename, dotfilename]
stdout, stderr = run(cmd, capture_output=True, check=True, quiet=False)

We also use the run() function to execute pdf2svg (a PDF-to-SVG conversion tool), as described in the next section.

Vector graphics via SVG

We use matplotlib to generate the decision and leaf nodes and, to get those images into the graphviz/dot image, we use graphviz HTML labels and reference the generated images via img tags like this:

The number 94806 is the process ID, which helps isolate multiple instances of animl running on the same machine. Without this, multiple processes could overwrite the same temporary files.
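A small sketch of that process-ID trick for temporary node-image filenames; the directory and naming scheme here are illustrative, not animl's actual ones:

import os
import tempfile

def node_image_path(node_id):
    # e.g. /tmp/node3_94806.png -- the process-ID suffix keeps concurrent
    # animl processes from overwriting each other's temporary node images
    return os.path.join(tempfile.gettempdir(), f"node{node_id}_{os.getpid()}.png")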

Because we wanted scalable vector graphics, we initially tried importing SVG images, but we could not get graphviz to accept those files (nor PDF). It took us four hours to figure out that generating SVG and importing SVG were two different things and that we needed the following magic incantation on OS X using --with-librsvg:

$ brew install graphviz --with-librsvg --with-app --with-pango

Originally, when we resorted to generating PNG files from matplotlib, we set the dots per inch (dpi) to 450 so that they looked good on high-resolution screens like the iMac's. Unfortunately, that meant we had to specify the actual overall tree size we wanted using an HTML table in graphviz with width and height parameters on <td> tags. That caused all sorts of trouble because we had to figure out what aspect ratios were coming out of matplotlib. Once we moved to SVG files, we needlessly parsed the SVG files to get the size to use in the HTML; as we wrote this document we realized that extracting size information from the SVG files was unnecessary.

Unfortunately, graphviz's SVG output simply referenced the node files we imported rather than embedding the node images within the overall tree image. That is a very inconvenient form, because sending around a single tree visualization means sending a zip file instead of a single file. We took the time to parse the SVG XML and embed all referenced images within a single large meta-SVG file. That worked great and there was much celebration.

Then we noticed that graphviz does not handle text in HTML labels properly when generating SVG. For example, the text of classifier tree legends was truncated and overlapping. Rats.

What finally worked to get a single clean SVG file was first generating a PDF file from graphviz and then converting the PDF to SVG with pdf2svg (pdf2cairo also appears to work).
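A rough sketch of that two-step pipeline using plain subprocess calls (the filenames are placeholders; animl itself goes through the graphviz package's run() helper):

import subprocess

subprocess.run(["dot", "-Tpdf", "-o", "tree.pdf", "tree.dot"], check=True)  # graphviz -> PDF
subprocess.run(["pdf2svg", "tree.pdf", "tree.svg"], check=True)             # PDF -> single clean SVG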

Then we noticed that Jupyter notebook has a bug where it does not display those SVG files properly (see above). JupyterLab handles the SVG correctly, as does GitHub. We added a topng() method so that Jupyter notebook users can use Image(viz.topng()) to get inline images. Better yet, call viz.view(), which will pop up a window that displays the images properly.

Lessons learned

Sometimes solving a programming problem is less about algorithms and more about working within the constraints and capabilities of the programming ecosystem, such as tools and libraries. That is definitely the case with this decision tree visualization software. The programming was not hard; it was more a matter of fearlessly bashing our way to victory through an appropriate mashup of graphics tools and libraries.

Designing the actual visualization also required a seemingly infinite number of experiments and tweaks. Generating high-quality vector images also required pathological determination and a trail of dead code left along the tortuous path to success.

We are definitely not visualization experts, but for this specific problem we banged on it until we got effective diagrams. From Edward Tufte's seminar I learned that you can pack a lot of information into a rich diagram, as long as it is not an arbitrary mishmash; the human eye can resolve lots of details. We used a number of elements from the design palette to visualize decision trees: color, line thickness, line style, different kinds of plots, size (area, length, graph height, ...), color transparency (alpha), text styles (color, font, bold, italics, size), graph annotations, and visual flow. All visual elements had to be motivated. For example, we did not use color just because colors are pretty. We used color to highlight an important dimension (target category) because humans quickly and easily pick out color differences. Node size differences are also easily spotted by humans (is that a kitty cat or a lion?), so we used that to indicate leaf size.

Future work

The visualizations described in this document are part of the animl machine learning library, which is just getting started. We'll likely move the rfpimp permutation importance library into animl soon. At this point, we haven't tested the visualizations on anything but OS X. We'd welcome instructions from programmers on other platforms so that we can include those installation steps in the documentation.

There are a couple of tweaks we'd like to make, such as bottom-justifying the histograms in classifier trees so that it's easier to compare nodes. Also, some of the wedge labels overlap with axis labels. Finally, it would be interesting to see what the trees look like with incoming edge thicknesses proportional to the number of samples in that node.

