
This is a general question about text-mining procedure. Suppose you have a corpus of documents classified as spam/No_Spam. As a standard procedure you can preprocess the data, remove punctuation, remove stop words, and so on. After converting it to a DocumentTermMatrix you can build models to predict spam/No_Spam. Here is my problem. I now want to use the model I built on newly arriving documents. To check a single document, should I create a DocumentTerm *vector* so that it can be used to predict spam/No_Spam? In the tm documentation I only found how to convert the full corpus to a matrix, for example using tf-idf weights. How can I then weight a single vector using the idf from the corpus? Do I have to change my corpus and build a new DocumentTermMatrix every time? I processed my corpus, converted it to a matrix and split it into a training and a test set, but there the test set was built from rows of the same document-term matrix as the whole set. I can check accuracy and so on, but I don't know what the best procedure is for classifying new text. In short: using the R tm package for predictive analysis, how do you classify a new document?
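To make that setup concrete, here is a minimal sketch of the workflow described above, with hypothetical object names (CorpusProc for the preprocessed corpus, spam_labels for the spam/No_Spam factor); the point is that the test rows come out of the same matrix that was weighted as a whole:

library(tm)
library(e1071)

dtm <- DocumentTermMatrix(CorpusProc)                    # CorpusProc: preprocessed corpus
df  <- data.frame(as.matrix(dtm), class = spam_labels)   # spam_labels: factor spam/No_Spam

idx   <- sample(nrow(df), floor(0.7 * nrow(df)))         # 70/30 split over rows of one matrix
train <- df[idx, ]
test  <- df[-idx, ]

fit <- svm(class ~ ., data = train)
mean(predict(fit, test) == test$class)                   # accuracy on the held-out rows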

Ben, imagine I have a preprocessed DocumentTermMatrix and I convert it to a data.frame.

dtm <- DocumentTermMatrix(CorpusProc,
                          control = list(weighting = function(x) weightTfIdf(x, normalize = FALSE),
                                         stopwords = TRUE,
                                         wordLengths = c(3, Inf),
                                         bounds = list(global = c(4, Inf)))) 

dtmDataFrame <- as.data.frame(inspect(dtm)) 

I added a factor variable and built a model.

Corpus.svm <- svm(Risk_Category ~ ., data = dtmDataFrame) 

Now imagine I give you a new document d (it was not in the corpus before) and you want the model's spam/No_Spam prediction for it. How do I do that?

OK, let's build an example based on the code used here.

examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or searching for guidance on mailing lists and here on SO, a reproducible example is often asked and always helpful. What are your tips for creating an excellent example? How do you paste data structures from r in a text format? What other information should you include? Are there other tricks in addition to using dput(), dump() or structure()? When should you include library() or require() statements? Which reserved words should one avoid, in addition to c, df, data, etc? How does one make a great r reproducible example?" 
examp2 <- "Sometimes the problem really isn't reproducible with a smaller piece of data, no matter how hard you try, and doesn't happen with synthetic data (although it's useful to show how you produced synthetic data sets that did not reproduce the problem, because it rules out some hypotheses). Posting the data to the web somewhere and providing a URL may be necessary. If the data can't be released to the public at large but could be shared at all, then you may be able to offer to e-mail it to interested parties (although this will cut down the number of people who will bother to work on it). I haven't actually seen this done, because people who can't release their data are sensitive about releasing it any form, but it would seem plausible that in some cases one could still post data if it were sufficiently anonymized/scrambled/corrupted slightly in some way. If you can't do either of these then you probably need to hire a consultant to solve your problem" 
examp3 <- "You are most likely to get good help with your R problem if you provide a reproducible example. A reproducible example allows someone else to recreate your problem by just copying and pasting R code. There are four things you need to include to make your example reproducible: required packages, data, code, and a description of your R environment. Packages should be loaded at the top of the script, so it's easy to see which ones the example needs. The easiest way to include data in an email is to use dput() to generate the R code to recreate it. For example, to recreate the mtcars dataset in R, I'd perform the following steps: Run dput(mtcars) in R Copy the output In my reproducible script, type mtcars <- then paste. Spend a little bit of time ensuring that your code is easy for others to read: make sure you've used spaces and your variable names are concise, but informative, use comments to indicate where your problem lies, do your best to remove everything that is not related to the problem. The shorter your code is, the easier it is to understand. Include the output of sessionInfo() as a comment. This summarises your R environment and makes it easy to check if you're using an out-of-date package. You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in. Before putting all of your code in an email, consider putting it on http://gist.github.com/. It will give your code nice syntax highlighting, and you don't have to worry about anything getting mangled by the email system." 
examp4 <- "Do your homework before posting: If it is clear that you have done basic background research, you are far more likely to get an informative response. See also Further Resources further down this page. Do help.search(keyword) and apropos(keyword) with different keywords (type this at the R prompt). Do RSiteSearch(keyword) with different keywords (at the R prompt) to search R functions, contributed packages and R-Help postings. See ?RSiteSearch for further options and to restrict searches. Read the online help for relevant functions (type ?functionname, e.g., ?prod, at the R prompt) If something seems to have changed in R, look in the latest NEWS file on CRAN for information about it. Search the R-faq and the R-windows-faq if it might be relevant (http://cran.r-project.org/faqs.html) Read at least the relevant section in An Introduction to R If the function is from a package accompanying a book, e.g., the MASS package, consult the book before posting. The R Wiki has a section on finding functions and documentation" 
examp5 <- "Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following: Try to find an answer by searching the archives of the forum you plan to post to. Try to find an answer by searching the Web. Try to find an answer by reading the manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection or experimentation. Try to find an answer by asking a skilled friend. If you're a programmer, try to find an answer by reading the source code. When you ask your question, display the fact that you have done these things first; this will help establish that you're not being a lazy sponge and wasting people's time. Better yet, display what you have learned from doing these things. We like answering questions for people who have demonstrated they can learn from the answers. Use tactics like doing a Google search on the text of whatever error message you get (searching Google groups as well as Web pages). This might well take you straight to fix documentation or a mailing list thread answering your question. Even if it doesn't, saying “I googled on the following phrase but didn't get anything that looked promising” is a good thing to do in e-mail or news postings requesting help, if only because it records what searches won't help. It will also help to direct other people with similar problems to your thread by linking the search terms to what will hopefully be your problem and resolution thread. Take your time. Do not expect to be able to solve a complicated problem with a few seconds of Googling. Read and understand the FAQs, sit back, relax and give the problem some thought before approaching experts. Trust us, they will be able to tell from your questions how much reading and thinking you did, and will be more willing to help if you come prepared. Don't instantly fire your whole arsenal of questions just because your first search turned up no answers (or too many). Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help. Beware of asking the wrong question. If you ask one that is based on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly literal answer while thinking Stupid question..., and hoping the experience of getting what you asked for rather than what you needed will teach you a lesson." 



corpus2 <- Corpus(VectorSource(c(examp1, examp2, examp3, examp4))) 

Note that I left example 5 out.

skipWords <- function(x) removeWords(x, stopwords("english")) 
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords) 
corpus2.proc <- tm_map(corpus2, FUN = tm_reduce, tmFuns = funcs) 

corpus2a.dtm <- DocumentTermMatrix(corpus2.proc, control = list(wordLengths = c(3,10))) 
dtmDataFrame <- as.data.frame(inspect(corpus2a.dtm)) 

I added a factor variable Spam_Classification with two levels, Spam/No_Spam.

dtmFinal <- cbind(dtmDataFrame, Spam_Classification) 

I build an SVM model:

Corpus.svm <- svm(Spam_Classification ~ ., data = dtmFinal)

Now imagine I get example 5 as a new document (an email). How do I generate a Spam/No_Spam value for it?


Please update your question to include the code you are using, some sample data so that your methods can be reproduced, and an example of the output you want. With that extra information you are more likely to get useful answers. – Ben


Ben, it is a very general question, I don't think we need code. Anyway, imagine I have a preprocessed DocumentTermMatrix and I convert it to a data.frame. dtm <- DocumentTermMatrix(CorpusProc, control = list(weighting = function(x) weightTfIdf(x, normalize = FALSE), stopwords = TRUE, wordLengths = c(3, Inf), bounds = list(global = c(4, Inf)))) –

Answers


It's not clear what your question is or what kind of answer you are looking for.

Assuming you are asking "how can I get a DocumentTerm *vector* to pass to other functions?", here is one method.

Some reproducible data:

examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or searching for guidance on mailing lists and here on SO, a reproducible example is often asked and always helpful. What are your tips for creating an excellent example? How do you paste data structures from r in a text format? What other information should you include? Are there other tricks in addition to using dput(), dump() or structure()? When should you include library() or require() statements? Which reserved words should one avoid, in addition to c, df, data, etc? How does one make a great r reproducible example?" 
examp2 <- "Sometimes the problem really isn't reproducible with a smaller piece of data, no matter how hard you try, and doesn't happen with synthetic data (although it's useful to show how you produced synthetic data sets that did not reproduce the problem, because it rules out some hypotheses). Posting the data to the web somewhere and providing a URL may be necessary. If the data can't be released to the public at large but could be shared at all, then you may be able to offer to e-mail it to interested parties (although this will cut down the number of people who will bother to work on it). I haven't actually seen this done, because people who can't release their data are sensitive about releasing it any form, but it would seem plausible that in some cases one could still post data if it were sufficiently anonymized/scrambled/corrupted slightly in some way. If you can't do either of these then you probably need to hire a consultant to solve your problem" 
examp3 <- "You are most likely to get good help with your R problem if you provide a reproducible example. A reproducible example allows someone else to recreate your problem by just copying and pasting R code. There are four things you need to include to make your example reproducible: required packages, data, code, and a description of your R environment. Packages should be loaded at the top of the script, so it's easy to see which ones the example needs. The easiest way to include data in an email is to use dput() to generate the R code to recreate it. For example, to recreate the mtcars dataset in R, I'd perform the following steps: Run dput(mtcars) in R Copy the output In my reproducible script, type mtcars <- then paste. Spend a little bit of time ensuring that your code is easy for others to read: make sure you've used spaces and your variable names are concise, but informative, use comments to indicate where your problem lies, do your best to remove everything that is not related to the problem. The shorter your code is, the easier it is to understand. Include the output of sessionInfo() as a comment. This summarises your R environment and makes it easy to check if you're using an out-of-date package. You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in. Before putting all of your code in an email, consider putting it on http://gist.github.com/. It will give your code nice syntax highlighting, and you don't have to worry about anything getting mangled by the email system." 
examp4 <- "Do your homework before posting: If it is clear that you have done basic background research, you are far more likely to get an informative response. See also Further Resources further down this page. Do help.search(keyword) and apropos(keyword) with different keywords (type this at the R prompt). Do RSiteSearch(keyword) with different keywords (at the R prompt) to search R functions, contributed packages and R-Help postings. See ?RSiteSearch for further options and to restrict searches. Read the online help for relevant functions (type ?functionname, e.g., ?prod, at the R prompt) If something seems to have changed in R, look in the latest NEWS file on CRAN for information about it. Search the R-faq and the R-windows-faq if it might be relevant (http://cran.r-project.org/faqs.html) Read at least the relevant section in An Introduction to R If the function is from a package accompanying a book, e.g., the MASS package, consult the book before posting. The R Wiki has a section on finding functions and documentation" 
examp5 <- "Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following: Try to find an answer by searching the archives of the forum you plan to post to. Try to find an answer by searching the Web. Try to find an answer by reading the manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection or experimentation. Try to find an answer by asking a skilled friend. If you're a programmer, try to find an answer by reading the source code. When you ask your question, display the fact that you have done these things first; this will help establish that you're not being a lazy sponge and wasting people's time. Better yet, display what you have learned from doing these things. We like answering questions for people who have demonstrated they can learn from the answers. Use tactics like doing a Google search on the text of whatever error message you get (searching Google groups as well as Web pages). This might well take you straight to fix documentation or a mailing list thread answering your question. Even if it doesn't, saying “I googled on the following phrase but didn't get anything that looked promising” is a good thing to do in e-mail or news postings requesting help, if only because it records what searches won't help. It will also help to direct other people with similar problems to your thread by linking the search terms to what will hopefully be your problem and resolution thread. Take your time. Do not expect to be able to solve a complicated problem with a few seconds of Googling. Read and understand the FAQs, sit back, relax and give the problem some thought before approaching experts. Trust us, they will be able to tell from your questions how much reading and thinking you did, and will be more willing to help if you come prepared. Don't instantly fire your whole arsenal of questions just because your first search turned up no answers (or too many). Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help. Beware of asking the wrong question. If you ask one that is based on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly literal answer while thinking Stupid question..., and hoping the experience of getting what you asked for rather than what you needed will teach you a lesson." 

Create a corpus from these texts:

corpus2 <- Corpus(VectorSource(c(examp1, examp2, examp3, examp4, examp5))) 

Process the text:

skipWords <- function(x) removeWords(x, stopwords("english")) 
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords) 
corpus2.proc <- tm_map(corpus2, FUN = tm_reduce, tmFuns = funcs) 

Convert the processed corpus to a document-term matrix:

corpus2a.dtm <- DocumentTermMatrix(corpus2.proc, control = list(wordLengths = c(3,10))) 
inspect(corpus2a.dtm) 

A document-term matrix (5 documents, 273 terms) 

Non-/sparse entries: 314/1051 
Sparsity   : 77% 
Maximal term length: 10 
Weighting   : term frequency (tf) 

    Terms 
Docs able actually addition allows answer answering answers archives are arsenal avoid background based 
    1 0  0  2  0  0   0  0  0 1  0  1   0  0 
    2 1  1  0  0  0   0  0  0 0  0  0   0  0 
    3 0  1  0  1  0   0  0  0 0  0  0   0  0 
    4 0  0  0  0  0   0  0  0 0  0  0   1  0 
    5 2  1  0  0  8   2  3  1 0  1  0   0  1 

This is the key line that gets you the "DocumentTerm *vector*" you refer to:

# access vector of first document in the dtm 
as.matrix(corpus2a.dtm)[1,] 

able actually addition  allows  answer answering answers archives  are 
     0   0   2   0   0   0   0   0   1 
    arsenal  avoid background  based  basic  before  better  beware  bit 
     0   1   0   0   0   0   0   0   0 
    board  book  bother  bug changed  chat  check  

In fact it is a named numeric vector, which should be handy to pass to other functions, etc., and it seems similar to what you want to do:

str(as.matrix(corpus2a.dtm)[1,]) 
Named num [1:273] 0 0 2 0 0 0 0 0 1 0 ... 

If you just want a numeric vector, try as.numeric(as.matrix(corpus2a.dtm)[1,])

Is that what you want to do?


Not exactly, sorry I wasn't clearer. I have already done all these steps. Imagine using the matrix created above to train a model (an svm, for example) with a categorical spam/No_spam variable. Then you want to use that model when new e-mails arrive. The issue is that the new e-mails do not belong to your corpus. When you want a spam/No_spam prediction you have to convert them into a matrix row and feed it to the model. That is where I am having trouble: new documents to classify. –


If I understand correctly, you need to process the new email (as in the 'tm_map' line above), then add it to the DocumentTermMatrix, then convert the DTM to a matrix, then run the model on it. You can simply use 'c' to add a new document to an existing DTM, or you can update an existing document in the DTM with 'content(myCorpus[[10]]) <- "hey I am the new content of this document"'. Does that help? – Ben
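As a rough sketch of that append-and-rebuild route (object names are hypothetical: CorpusProc is the preprocessed training corpus, funcs the cleaning functions from above, new_email the raw text of the incoming message), one could do something like:

new_doc    <- tm_map(Corpus(VectorSource(new_email)), FUN = tm_reduce, tmFuns = funcs)  # same cleaning as training
CorpusProc <- c(CorpusProc, new_doc)                        # corpus now contains the new email

dtm_all <- DocumentTermMatrix(CorpusProc)                   # DTM (and any weights) recomputed from scratch
new_row <- as.data.frame(as.matrix(dtm_all))[nrow(dtm_all), , drop = FALSE]
# if the vocabulary changed, the svm has to be refit on the rebuilt matrix before predicting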


Well, I'd rather not change the corpus every time a new email arrives: it would change the whole matrix based on the tf-idf numbers, and I would certainly have to build a new SVM every time. Here is the problem. I would like to take the new email, run the preprocessing, and build a 1-row vector with the same columns as the matrix, taking the tf from the new document and the idf from the corpus, and then use it to predict spam/No_Spam. What I don't know is whether there is a standard procedure or a function to accomplish this, or whether it has to be coded by hand. –
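There is no ready-made tm helper for exactly that, but a hand-rolled sketch of the idea (tf from the new document, idf from the training corpus, mimicking weightTfIdf(normalize = FALSE)) could look roughly like this; dtm_tf is assumed to be the unweighted term-frequency DTM of the training corpus, funcs the cleaning functions from above, and new_email the raw text of the message:

library(tm)

m     <- as.matrix(dtm_tf)
terms <- colnames(m)
idf   <- log2(nrow(m) / colSums(m > 0))              # document frequencies come from training only

new_corp <- tm_map(Corpus(VectorSource(new_email)), FUN = tm_reduce, tmFuns = funcs)
new_dtm  <- DocumentTermMatrix(new_corp, control = list(dictionary = terms))
new_tf   <- as.matrix(new_dtm)[1, terms]             # 1-row tf vector with the training columns

new_row <- as.data.frame(t(new_tf * idf[terms]))     # tf (new doc) * idf (training corpus)
# predict(Corpus.svm, new_row)                       # column names must match the training data.frame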


I have the same problem and I think the RTextTools package can help you.

Look at create_matrix:

... 
originalMatrix - The original DocumentTermMatrix used to train the models. If supplied, will 
adjust the new matrix to work with saved models. 
...

So in code:

train.data <- loadDataTable() # load data from DB - 3 columns (info, subject, category) 
train.matrix <- create_matrix(train.data[, c("subject", "info")], language="english", removeNumbers=TRUE, stemWords=FALSE, weighting=weightTfIdf) 
train.container <- create_container(train.matrix,train.data$category,trainSize=1:nrow(train.data), virgin=FALSE) 
model <- train_model(train.container, algorithm=c("SVM")) 
# save model & matrix 

predict.text <- function(info, subject, train.matrix, model) 
{ 
    predict.matrix <- create_matrix(cbind(subject = subject, info = info), originalMatrix = train.matrix, language="english", removeNumbers=TRUE, stemWords=FALSE, weighting=weightTfIdf) 
    predict.container <- create_container(predict.matrix, NULL, testSize = 1, virgin = FALSE) # testSize = 1 - we have only one row! 
    return(classify_model(predict.container, model)) 
} 
1

Thanks for this interesting question. I have been thinking it over for quite some time. To make it short, the quintessence of my findings: for weighting methods other than plain tf, there is no way around the laborious work of recalculating the whole DTM (and probably re-running your svm).

Only for tf weighting did I find an easy process for classifying new content. You have to transform the new document (of course) into a DTM. During the transformation you need to add a dictionary containing all the terms you used to train your classifier on the old corpus. Then you can use predict() as usual. For the tf case, here is a very minimal sample and a method for classifying a new document:

### I) Data 

texts <- c("foo bar spam", 
      "bar baz ham", 
      "baz qux spam", 
      "qux quux ham") 

categories <- c("Spam", "Ham", "Spam", "Ham") 

new <- "quux quuux ham" 

### II) Building Model on Existing Documents „texts“ 

library(tm) # text mining package for R 
library(e1071) # package with various machine-learning libraries 

## creating DTM for texts 
dtm <- DocumentTermMatrix(Corpus(VectorSource(texts))) 

## making DTM a data.frame and adding variable categories 
df <- data.frame(categories, as.data.frame(inspect(dtm))) 

model <- svm(categories~., data=df) 

### III) Predicting class of new 

## creating dtm for new 
dtm_n <- DocumentTermMatrix(Corpus(VectorSource(new)), 
          ## without this line predict won't work 
          control=list(dictionary=names(df))) 
## creating data.frame for new 
df_n <- as.data.frame(inspect(dtm_n)) 

predict(model, df_n) 

## > 1 
## > Ham 
## > Levels: Ham Spam
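The same dictionary trick then works for any further document without touching the training matrix or refitting the model; a hypothetical second message could be classified like this (as.matrix() is used instead of inspect() purely to avoid the printed output):

new2  <- "foo baz baz spam" 
dtm_2 <- DocumentTermMatrix(Corpus(VectorSource(new2)), 
          control = list(dictionary = names(df))) 
predict(model, as.data.frame(as.matrix(dtm_2))) 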