
How to customize the user dictionary in ictclas4j

I downloaded ictclas4j and read through the source, looking for an example; org.ictclas4j.run.SegMain is runnable. The core segmentation logic lives in the split(String src) method of org.ictclas4j.segment.Segment. Running SegMain produces a single string (with part-of-speech tags), but after a close look at Segment and org.ictclas4j.bean.SegResult, there is no list of the individual segmented words anywhere. That makes it hard to extend into a Lucene tokenizer. Sigh — time to hack it.

The entry point for the hack is the final result, which is recorded in the finalResult field of SegResult and produced in Segment.split(String src). Working through the code, the outputResult(ArrayList<SegNode> wrList) method is where the individual words get concatenated into a string, so we can modify it to collect the words as well. The hack goes as follows.

1. Modify Segment:

1) Copy the original outputResult(ArrayList<SegNode> wrList) to a new outputResult(ArrayList<SegNode> wrList, ArrayList<String> words) method and add the word-collecting code, ending up with:

// Generate the segmentation result from the segmentation path
private String outputResult(ArrayList<SegNode> wrList, ArrayList<String> words) {
    String result = null;
    String temp = null;
    char[] pos = new char[2];
    if (wrList != null && wrList.size() > 0) {
        result = "";
        for (int i = 0; i < wrList.size(); i++) {
            SegNode sn = wrList.get(i);
            if (sn.getPos() != POSTag.SEN_BEGIN && sn.getPos() != POSTag.SEN_END) {
                int tag = Math.abs(sn.getPos());
                pos[0] = (char) (tag / 256);
                pos[1] = (char) (tag % 256);
                temp = "" + pos[0];
                if (pos[1] > 0)
                    temp += "" + pos[1];
                result += sn.getSrcWord() + "/" + temp + " ";
                if (words != null) { // chenlb add
                    words.add(sn.getSrcWord());
                }
            }
        }
    }
    return result;
}
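The method above unpacks a POS tag stored as a single int: the high byte is the first letter of the tag and the low byte is the (optional) second letter. To see how that decoding behaves, here is a standalone sketch; the packed values below are constructed for illustration, not taken from the actual POSTag constants:

```java
public class PosDecode {

    // Decode an ICTCLAS-style packed POS tag: high byte holds the first
    // letter, low byte the optional second letter (0 if absent).
    static String decode(int tag) {
        tag = Math.abs(tag);
        char first = (char) (tag / 256);
        char second = (char) (tag % 256);
        String s = "" + first;
        if (second > 0) {
            s += second; // two-letter tags such as "ns" or "vn"
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(decode('n' * 256 + 's')); // prints "ns" (place name)
        System.out.println(decode('v' * 256));       // prints "v" (verb)
    }
}
```

This explains the `if (pos[1] > 0)` check: single-letter tags leave the low byte at zero, so the second character is simply skipped.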

2) Change the original outputResult(ArrayList<SegNode> wrList) to:

// chenlb: moved to outputResult(ArrayList<SegNode> wrList, ArrayList<String> words)
private String outputResult(ArrayList<SegNode> wrList) {
    return outputResult(wrList, null);
}

3) Update the call site of outputResult(ArrayList<SegNode> wrList) (note: not all call sites). Around line 126 of Segment, change String optResult = outputResult(optSegPath); to String optResult = outputResult(optSegPath, words); — and, of course, declare the ArrayList<String> words. The final Segment.split(String src) looks like this:

public SegResult split(String src) {
    SegResult sr = new SegResult(src); // segmentation result
    String finalResult = null;
    if (src != null) {
        finalResult = "";
        int index = 0;
        String midResult = null;
        sr.setRawContent(src);
        SentenceSeg ss = new SentenceSeg(src);
        ArrayList<Sentence> sens = ss.getSens();
        ArrayList<String> words = new ArrayList<String>(); // chenlb add
        for (Sentence sen : sens) {
            logger.debug(sen);
            long start = System.currentTimeMillis();
            MidResult mr = new MidResult();
            mr.setIndex(index++);
            mr.setSource(sen.getContent());
            if (sen.isSeg()) {
                // Atom segmentation
                AtomSeg as = new AtomSeg(sen.getContent());
                ArrayList<Atom> atoms = as.getAtoms();
                mr.setAtoms(atoms);
                System.err.println("[atom time]:" + (System.currentTimeMillis() - start));
                start = System.currentTimeMillis();

                // Build the segmentation graph: rough segmentation first,
                // then optimization, finally POS tagging
                SegGraph segGraph = GraphGenerate.generate(atoms, coreDict);
                mr.setSegGraph(segGraph.getSnList());

                // Build the bigram segmentation graph
                SegGraph biSegGraph = GraphGenerate.biGenerate(segGraph, coreDict, bigramDict);
                mr.setBiSegGraph(biSegGraph.getSnList());
                System.err.println("[graph time]:" + (System.currentTimeMillis() - start));
                start = System.currentTimeMillis();

                // Find the N shortest paths
                NShortPath nsp = new NShortPath(biSegGraph, segPathCount);
                ArrayList<ArrayList<Integer>> bipath = nsp.getPaths();
                mr.setBipath(bipath);
                System.err.println("[NSP time]:" + (System.currentTimeMillis() - start));
                start = System.currentTimeMillis();

                for (ArrayList<Integer> onePath : bipath) {
                    // Get the first-pass segmentation path
                    ArrayList<SegNode> segPath = getSegPath(segGraph, onePath);
                    ArrayList<SegNode> firstPath = AdjustSeg.firstAdjust(segPath);
                    String firstResult = outputResult(firstPath);
                    mr.addFirstResult(firstResult);
                    System.err.println("[first time]:" + (System.currentTimeMillis() - start));
                    start = System.currentTimeMillis();

                    // Recognize unknown words and optimize the first-pass result
                    SegGraph optSegGraph = new SegGraph(firstPath);
                    ArrayList<SegNode> sns = clone(firstPath);
                    personTagger.recognition(optSegGraph, sns);
                    transPersonTagger.recognition(optSegGraph, sns);
                    placeTagger.recognition(optSegGraph, sns);
                    mr.setOptSegGraph(optSegGraph.getSnList());
                    System.err.println("[unknown time]:" + (System.currentTimeMillis() - start));
                    start = System.currentTimeMillis();

                    // Rebuild the bigram segmentation graph from the optimized result
                    SegGraph optBiSegGraph = GraphGenerate.biGenerate(optSegGraph, coreDict, bigramDict);
                    mr.setOptBiSegGraph(optBiSegGraph.getSnList());

                    // Recompute the N shortest paths
                    NShortPath optNsp = new NShortPath(optBiSegGraph, segPathCount);
                    ArrayList<ArrayList<Integer>> optBipath = optNsp.getPaths();
                    mr.setOptBipath(optBipath);

                    // Generate the optimized segmentation result, POS-tag it,
                    // and apply the final adjustments
                    ArrayList<SegNode> adjResult = null;
                    for (ArrayList<Integer> optOnePath : optBipath) {
                        ArrayList<SegNode> optSegPath = getSegPath(optSegGraph, optOnePath);
                        lexTagger.recognition(optSegPath);
                        String optResult = outputResult(optSegPath, words); // chenlb changed
                        mr.addOptResult(optResult);
                        adjResult = AdjustSeg.finaAdjust(optSegPath, personTagger, placeTagger);
                        String adjrs = outputResult(adjResult);
                        System.err.println("[last time]:" + (System.currentTimeMillis() - start));
                        start = System.currentTimeMillis();
                        if (midResult == null)
                            midResult = adjrs;
                        break;
                    }
                }
                sr.addMidResult(mr);
            } else {
                midResult = sen.getContent();
                words.add(midResult); // chenlb add
            }
            finalResult += midResult;
            midResult = null;
        }
        sr.setWords(words); // chenlb add
        sr.setFinalResult(finalResult);
        DebugUtil.output2html(sr);
        logger.info(finalResult);
    }
    return sr;
}

4) In the Segment constructor, the dictionary path separator can be changed to "/".

5) A word-dropping bug was fixed along the way as well; see: "a bug in ictclas4j".

2. Modify SegResult:

Add the following:

private ArrayList<String> words; // the segmented words, chenlb add

/**
 * Add a word.
 * @param word not added if null
 * @author chenlb 2009-1-21 05:01:25 PM
 */
public void addWord(String word) {
    if (words == null) {
        words = new ArrayList<String>();
    }
    if (word != null) {
        words.add(word);
    }
}

public ArrayList<String> getWords() {
    return words;
}

public void setWords(ArrayList<String> words) {
    this.words = words;
}

Now let's build the Lucene analyzer for ictclas4j.

1. Create an ICTCLAS4jTokenizer class:

package com.chenlb.analysis.ictclas4j;

import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.Tokenizer;
import org.ictclas4j.bean.SegResult;
import org.ictclas4j.segment.Segment;

/**
 * ictclas4j tokenizer.
 *
 * @author chenlb 2009-1-23 11:39:10 AM
 */
public class ICTCLAS4jTokenizer extends Tokenizer {

    private static Segment segment;

    private StringBuilder sb = new StringBuilder();
    private ArrayList<String> words;

    private int startOffest = 0;
    private int length = 0;
    private int wordIdx = 0;

    public ICTCLAS4jTokenizer() {
        words = new ArrayList<String>();
    }

    public ICTCLAS4jTokenizer(Reader input) {
        super(input);
        char[] buf = new char[8192];
        int d = -1;
        try {
            while ((d = input.read(buf)) != -1) {
                sb.append(buf, 0, d);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        SegResult sr = seg().split(sb.toString()); // segment the input
        words = sr.getWords();
    }

    public Token next(Token reusableToken) throws IOException {
        assert reusableToken != null;
        length = 0;
        Token token = null;
        if (wordIdx < words.size()) {
            String word = words.get(wordIdx);
            length = word.length();
            token = reusableToken.reinit(word, startOffest, startOffest + length);
            wordIdx++;
            startOffest += length;
        }
        return token;
    }

    private static Segment seg() {
        if (segment == null) {
            segment = new Segment(1);
        }
        return segment;
    }
}
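Note that next() assigns token offsets simply by accumulating word lengths, which assumes the word list covers the input contiguously; characters the segmenter drops (punctuation, whitespace) will shift the offsets. The bookkeeping itself can be illustrated with a standalone sketch (the word list below is made up for the example):

```java
import java.util.Arrays;
import java.util.List;

public class OffsetDemo {

    // Mimics the tokenizer's offset bookkeeping: each token starts
    // exactly where the previous one ended.
    static String describe(List<String> words) {
        StringBuilder sb = new StringBuilder();
        int startOffset = 0;
        for (String word : words) {
            int length = word.length();
            sb.append(word).append('[').append(startOffset).append(',')
              .append(startOffset + length).append("] ");
            startOffset += length;
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(describe(Arrays.asList("京華", "時", "報")));
        // prints: 京華[0,2] 時[2,3] 報[3,4]
    }
}
```

For exact highlighting one would need the segmenter to report source offsets, which the hacked word list does not carry.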

2. Create an ICTCLAS4jFilter class:

package com.chenlb.analysis.ictclas4j;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

/**
 * Filters out punctuation and the like.
 *
 * @author chenlb 2009-1-23 03:06:00 PM
 */
public class ICTCLAS4jFilter extends TokenFilter {

    protected ICTCLAS4jFilter(TokenStream input) {
        super(input);
    }

    public final Token next(final Token reusableToken) throws java.io.IOException {
        assert reusableToken != null;
        for (Token nextToken = input.next(reusableToken); nextToken != null; nextToken = input.next(reusableToken)) {
            String text = nextToken.term();
            switch (Character.getType(text.charAt(0))) {
            case Character.LOWERCASE_LETTER:
            case Character.UPPERCASE_LETTER:
                // English word/token should be longer than 1 character.
                if (text.length() > 1) {
                    return nextToken;
                }
                break;
            case Character.DECIMAL_DIGIT_NUMBER:
            case Character.OTHER_LETTER:
                // One Chinese character as one Chinese word.
                // Chinese word extraction to be added later here.
                return nextToken;
            }
        }
        return null;
    }
}
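The filter keeps or drops a token based on the Unicode category of its first character. The decision logic can be checked in isolation with a self-contained sketch (no Lucene needed):

```java
public class TypeDemo {

    // The same keep/drop decision as ICTCLAS4jFilter, for a single token.
    static boolean keep(String text) {
        switch (Character.getType(text.charAt(0))) {
        case Character.LOWERCASE_LETTER:
        case Character.UPPERCASE_LETTER:
            return text.length() > 1; // drop one-letter English tokens
        case Character.DECIMAL_DIGIT_NUMBER:
        case Character.OTHER_LETTER:  // CJK ideographs fall in this category
            return true;
        }
        return false; // punctuation and everything else
    }

    public static void main(String[] args) {
        System.out.println(keep("冷空氣")); // true: OTHER_LETTER
        System.out.println(keep("7"));     // true: DECIMAL_DIGIT_NUMBER
        System.out.println(keep("a"));     // false: single English letter
        System.out.println(keep("，"));    // false: punctuation
    }
}
```

This is why the punctuation tokens ("，", "。") visible in the raw segmentation output disappear from the analyzer output at the end of the post.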

3. Create an ICTCLAS4jAnalyzer class:

package com.chenlb.analysis.ictclas4j;

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;

/**
 * Lucene analyzer for ictclas4j.
 *
 * @author chenlb 2009-1-23 11:39:39 AM
 */
public class ICTCLAS4jAnalyzer extends Analyzer {

    private static final long serialVersionUID = 1L;

    // More stop words (high-frequency words of little use) can be added here
    private static final String[] STOP_WORDS = {
        "and", "are", "as", "at", "be", "but", "by",
        "for", "if", "in", "into", "is", "it",
        "no", "not", "of", "on", "or", "such",
        "that", "the", "their", "then", "there", "these",
        "they", "this", "to", "was", "will", "with",
        "的"
    };

    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new ICTCLAS4jTokenizer(reader);
        result = new ICTCLAS4jFilter(new StopFilter(new LowerCaseFilter(result), STOP_WORDS));
        return result;
    }
}
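The analyzer chains the tokenizer through LowerCaseFilter, then StopFilter, then ICTCLAS4jFilter. The lowercase-then-stop stage can be sketched standalone; the token list here is invented for the example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StopDemo {

    // Mimics LowerCaseFilter + StopFilter: lowercase each token,
    // then drop the ones found in the stop set.
    static List<String> lowercaseAndStop(List<String> tokens, Set<String> stopWords) {
        List<String> out = new ArrayList<String>();
        for (String t : tokens) {
            String lower = t.toLowerCase();
            if (!stopWords.contains(lower)) {
                out.add(lower);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Set<String> stop = new HashSet<String>(Arrays.asList("the", "of", "的"));
        System.out.println(lowercaseAndStop(
                Arrays.asList("The", "北風", "的", "影響"), stop));
        // prints: [北風, 影響]
    }
}
```

Ordering matters: lowercasing must come before the stop check so that "The" matches the stop entry "the" — which is exactly why the analyzer wraps LowerCaseFilter inside StopFilter.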

Now let's test the segmentation.

Input text:

京華時報1月23日報道 昨天,受一股來自中西伯利亞的強冷空氣影響,本市出現大風降溫天氣,白天最高氣溫只有零下7攝氏度,同時伴有6到7級的偏北風。

Raw segmentation result (with POS tags):

京華/nz 時/ng 報/v 1月/t 23日/t 報道/v 昨天/t ,/w 受/v 一/m 股/q 來自/v 中/f 西伯利亞/ns 的/u 強/a 冷空氣/n 影響/vn ,/w 本市/r 出現/v 大風/n 降溫/vn 天氣/n ,/w 白天/t 最高/a 氣溫/n 只/d 有/v 零下/s 7/m 攝氏度/q ,/w 同時/c 伴/v 有/v 6/m 到/v 7/m 級/q 的/u 偏/a 北風/n 。/w

Analyzer output:

[京華] [時] [報] [1月] [23日] [報道] [昨天] [受] [一] [股] [來自] [中] [西伯利亞] [強] [冷空氣] [影響] [本市] [出現] [大風] [降溫] [天氣] [白天] [最高] [氣溫] [只] [有] [零下] [7] [攝氏度] [同時] [伴] [有] [6] [到] [7] [級] [偏] [北風]