How to extract word frequencies with Python

Published: 2022-04-30 17:54:28

❶ I have a txt file that has already been segmented with jieba. How can I use Python to count the word frequencies in this segmented file? Looking for a script, thanks.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os, random

# Assume the file to read is named aa.txt and sits in the current directory
filename = 'aa.txt'
dirname = os.getcwd()
f_n = os.path.join(dirname, filename)
# The commented-out block below was used to test the script: it writes 20 lines,
# each containing a random number (1-20) of random integers between 1 and 20
'''
test = ''
for i in range(20):
    for j in range(random.randint(1, 20)):
        test += str(random.randint(1, 20)) + ' '
    test += '\n'
with open(f_n, 'w') as wf:
    wf.write(test)
'''
with open(f_n) as f:
    s = f.readlines()

# Strip whitespace and newlines from each line, split on whitespace,
# and collect all tokens into one flat list
words = []
for line in s:
    words.extend(line.strip().split())

# Format one output row: first and last fields take 8 columns, the middle one 18
def geshi(a, b, c):
    return alignment(str(a)) + alignment(str(b), 18) + alignment(str(c)) + '\n'

# Mixed Chinese/English alignment, see http://bbs.fishc.com/thread-67465-1-1.html (2nd post):
# str.format() misaligns mixed CJK/ASCII text, so pad by encoded byte width instead
def alignment(str1, space=8, align='left'):
    length = len(str1.encode('gb2312'))
    space = space - length if space >= length else 0
    if align in ['left', 'l', 'L', 'Left', 'LEFT']:
        str1 = str1 + ' ' * space
    elif align in ['right', 'r', 'R', 'Right', 'RIGHT']:
        str1 = ' ' * space + str1
    elif align in ['center', 'c', 'C', 'Center', 'CENTER', 'centre']:
        str1 = ' ' * (space // 2) + str1 + ' ' * (space - space // 2)
    return str1

w_s = geshi('序号', '词', '频率')
# Build a list of (word, count) tuples, sorted by count descending and then by word
# ascending (a two-level sort: one key descending, the other ascending)
wordcount = sorted([(w, words.count(w)) for w in set(words)], key=lambda l: (-l[1], l[0]))
# Each output row is: rank (8 cols) + word (18 cols) + count (8 cols) + newline
for i, (w, c) in enumerate(wordcount):
    w_s += geshi(i + 1, w, c)
# Write the result to ar.txt
writefile = 'ar.txt'
w_n = os.path.join(dirname, writefile)
with open(w_n, 'w') as wf:
    wf.write(w_s)
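
For a file that is already segmented and space-separated, the same counting can also be done more compactly with collections.Counter. This is only an alternative sketch, assuming the same aa.txt / ar.txt file names as above and UTF-8 encoding; note that most_common() sorts by count only, without the secondary sort on the word itself.

# Compact alternative: count the words of the pre-segmented aa.txt with Counter
from collections import Counter

with open('aa.txt', encoding='utf-8') as f:
    words = f.read().split()                  # split on any whitespace

counts = Counter(words)                       # word -> frequency
with open('ar.txt', 'w', encoding='utf-8') as out:
    for rank, (word, freq) in enumerate(counts.most_common(), start=1):
        out.write('%d\t%s\t%d\n' % (rank, word, freq))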

❷ Python: there are two Chinese txt files. File 1 is a passage of text; file 2 contains some words separated by single spaces. How do I count how often the words in file 2 appear in file 1?

You need to segment the text first and then match against the word list.

For segmentation, a third-party library such as jieba works well and is easy to install.


Use jieba to split document 1 and write the result to an intermediate file such as mid.txt (I no longer remember whether it should be stored as a dict or a list...).

Then simply iterate over it and count (see the sketch below).


Note that jieba has several segmentation modes; pick the one that does not produce overlapping segments. For example, for the sentence "好久不见",

different modes may split it as:

  1. ["好", "久", "不", "见", "好久", "不见", "久不", ...]

  2. ["好久", "不见"]

    If you pick the wrong mode, matching a word such as "好" can give inconsistent counts.
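
A minimal sketch of this approach, assuming the passage is in text1.txt and the word list in text2.txt (placeholder names, both UTF-8); precise mode (jieba.lcut) is used because its segments do not overlap:

# Count how often each word listed in text2.txt occurs in text1.txt
from collections import Counter
import jieba

with open('text1.txt', encoding='utf-8') as f:
    passage = f.read()
with open('text2.txt', encoding='utf-8') as f:
    targets = f.read().split()            # words separated by single spaces

counts = Counter(jieba.lcut(passage))     # precise mode: no overlapping segments
for word in targets:
    print(word, counts[word])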

❸ How to use a Python crawler to extract the most frequently occurring words

This is entirely doable.
You can look at the "python爬虫联想词" (Python crawler suggested-words) video to learn the basics first.

❹ How to count CET-6 vocabulary frequencies with Python

It depends on what material you are counting over.
Here is an example of counting how often each word appears in a text string with Python, shared for reference. The implementation is as follows:
# word frequency in a text
# tested with Python24  vegaseat  25aug2005
# Chinese wisdom ...
str1 = """Man who run in front of car, get tired.
Man who run behind car, get exhausted."""
print "Original string:"
print str1
print
# create a list of words separated at whitespaces
wordList1 = str1.split(None)
# strip any punctuation marks and build modified word list
# start with an empty list
wordList2 = []
for word1 in wordList1:
    # last character of each word
    lastchar = word1[-1:]
    # use a list of punctuation marks
    if lastchar in [",", ".", "!", "?", ";"]:
        word2 = word1.rstrip(lastchar)
    else:
        word2 = word1
    # build a wordList of lower case modified words
    wordList2.append(word2.lower())
print "Word list created from modified string:"
print wordList2
print
# create a wordfrequency dictionary
# start with an empty dictionary
freqD2 = {}
for word2 in wordList2:
    freqD2[word2] = freqD2.get(word2, 0) + 1
# create a list of keys and sort the list
# all words are lower case already
keyList = freqD2.keys()
keyList.sort()
print "Frequency of each word in the word list (sorted):"
for key2 in keyList:
    print "%-10s %d" % (key2, freqD2[key2])

Hopefully this example is helpful for your Python programming.
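
The listing above is Python 2 (note the bare print statements and keyList.sort()). A roughly equivalent Python 3 sketch, using collections.Counter, would look like this (an adaptation, not part of the original answer):

# Python 3 version of the same idea: split, strip trailing punctuation, lowercase, count
from collections import Counter

str1 = """Man who run in front of car, get tired.
Man who run behind car, get exhausted."""

words = [w.rstrip(",.!?;").lower() for w in str1.split()]
freq = Counter(words)
print("Frequency of each word in the word list (sorted):")
for word in sorted(freq):
    print("%-10s %d" % (word, freq[word]))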

❺ How to extract Chinese keywords with Python

Remove non-Chinese characters
Segment the text
Count word frequencies
Extract the keywords (a sketch of these steps follows below)
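
A minimal sketch of that pipeline, assuming the text is in a UTF-8 file named doc.txt (placeholder) and using jieba for segmentation:

# Keep only Chinese characters, segment, count, then take the top words as keywords
import re
from collections import Counter
import jieba

with open('doc.txt', encoding='utf-8') as f:
    text = f.read()

text = re.sub(r'[^\u4e00-\u9fff]+', ' ', text)       # 1. drop non-Chinese characters
words = jieba.lcut(text)                              # 2. segment
counts = Counter(w for w in words if len(w) > 1)      # 3. count (skip single chars / spaces)
print(counts.most_common(10))                         # 4. extract the 10 most frequent words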

❻ How to count word frequencies with Python

Code:

passage="""Editor’s Note: Looking through VOA's listener mail, we came across a letter that asked a simple question. "What do Americans think about China?" We all care about the perceptions of others. It helps us better understand who we are. VOA Reporter Michael Lipin begins a series providing some answers to our listener's question. His assignment: present a clearer picture of what Americans think about their chief world rival, and what drives those perceptions.

Two common American attitudes toward China can be identified from the latest U.S. public opinion surveys published by Gallup and Pew Research Center in the past year.

First, most of the Americans surveyed have unfavorable opinions of China as a whole, but do not view the country as a threat toward the United States at the present time.

Second, most survey respondents expect China to pose an economic and military threat to the United States in the future, with more Americans worried about the perceived economic threat than the military one.

Most Americans view China unfavorably

To understand why most Americans appear to have negative feelings about China, analysts interviewed by VOA say a variety of factors should be considered. Primary among them is a lack of familiarity.

"Most Americans do not have a strong interest in foreign affairs, Chinese or otherwise," says Robert Daly, director of the Kissinger Institute on China and the United States at the Washington-based Wilson Center.

Many of those Americans also have never traveled to China, in part because of the distance and expense. "That means that like most human beings, they take short cuts to understanding China," Daly says.

Rather than make the effort to regularly consume a wide range of U.S. media reports about China, analysts say many Americans base their views on widely-publicized major events in China's recent history."""

# Replace punctuation with spaces and normalize the curly apostrophe
passage = (passage.replace(",", " ").replace(".", " ").replace(":", " ")
           .replace("’", "'").replace('"', " ").replace("?", " ")
           .replace("!", " ").replace("\n", " "))

passagelist = passage.split(" ")        # split into individual tokens

pc = passagelist.copy()                 # work on a copy

for i in range(len(pc)):
    pi = pc[i]                          # current token
    if pi.count(" ") == len(pi):        # if it consists only of spaces (i.e. is empty)
        passagelist.remove(pi)          # drop it

worddict = {}

for j in range(len(passagelist)):
    pj = passagelist[j]                 # current word
    if pj not in worddict:              # not counted yet
        worddict[pj] = 1                # add it with a count of 1
    else:                               # already counted
        worddict[pj] += 1               # increment the count

output = ""                             # output in alphabetical order, tab-separated

worddictlist = list(worddict.keys())    # all distinct words

worddictlist.sort()                     # sort (note: upper and lower case sort separately)

worddict2 = {}

for k in worddictlist:
    worddict2[k] = worddict[k]          # dictionary in sorted key order

print("单词 次数")                       # header row: word, count

for m in worddict2:                     # print each word and its count
    tabs = (23 - len(m)) // 8           # tab count from word length; set tabs = 2 if pasting into a spreadsheet
    print("%s%s%d" % (m, " " * tabs, worddict[m]))

Note: the passage assigned to `passage` (the bolded part in the original post) is the text being analyzed; replace it with your own. The output I get is:

American 1

Americans 9

Center 2

China 10

China's 1

Chinese 1

Daly 2

Editor's 1

First 1

Gallup 1

His 1

Institute 1

It 1

Kissinger 1

Lipin 1

Looking 1

Many 1

Michael 1

Most 2

Note 1

Pew 1

Primary 1

Rather 1

Reporter 1

Research 1

Robert 1

S 2

Second 1

States 3

That 1

To 1

Two 1

U 2

United 3

VOA 2

VOA's 1

Washington-based1

We 1

What 1

Wilson 1

a 10

about 6

across 1

affairs 1

all 1

also 1

among 1

an 1

analysts 2

and 5

answers 1

appear 1

are 1

as 2

asked 1

assignment 1

at 2

attitudes 1

base 1

be 2

because 1

begins 1

beings 1

better 1

but 1

by 2

came 1

can 1

care 1

chief 1

clearer 1

common 1

considered 1

consume 1

country 1

cuts 1

director 1

distance 1

do 3

drives 1

economic 2

effort 1

events 1

expect 1

expense 1

factors 1

familiarity 1

feelings 1

foreign 1

from 1

future 1

have 4

helps 1

history 1

human 1

identified 1

in 5

interest 1

interviewed 1

is 1

lack 1

latest 1

letter 1

like 1

listener 1

listener's 1

mail 1

major 1

make 1

many 1

means 1

media 1

military 2

more 1

most 4

negative 1

never 1

not 2

of 10

on 2

one 1

opinion 1

opinions 1

or 1

others 1

otherwise 1

our 1

part 1

past 1

perceived 1

perceptions 2

picture 1

pose 1

present 2

providing 1

public 1

published 1

question 2

range 1

recent 1

regularly 1

reports 1

respondents 1

rival 1

say 2

says 2

series 1

short 1

should 1

simple 1

some 1

strong 1

survey 1

surveyed 1

surveys 1

take 1

than 2

that 2

the 16

their 2

them 1

they 1

think 2

those 2

threat 3

through 1

time 1

to 7

toward 2

traveled 1

understand 2

understanding 1

unfavorable 1

unfavorably 1

us 1

variety 1

view 2

views 1

we 2

what 2

who 1

whole 1

why 1

wide 1

widely-publicized1

with 1

world 1

worried 1

year 1

(The output should be aligned; the alignment got garbled when pasted here.)

Note: limitations that are currently hard to fix:

1. Case: the script cannot tell which words must be capitalized and which are only capitalized because they start a sentence.

2. 's: possessives are currently counted as part of the word they are attached to.

3. Sorting: it is hard to sort the output by frequency.

❼ How can I use Python to extract the twenty most frequent words from a txt file and print them from most to least frequent?
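
A minimal sketch of one way to do this, assuming a UTF-8 file named input.txt (placeholder) and jieba for segmentation; Counter.most_common already returns the words sorted from most to least frequent, which also addresses limitation 3 above:

# Print the 20 most frequent words of input.txt, most frequent first
from collections import Counter
import jieba

with open('input.txt', encoding='utf-8') as f:
    text = f.read()

words = [w for w in jieba.lcut(text) if len(w.strip()) > 1]   # skip single chars and whitespace
for word, freq in Counter(words).most_common(20):
    print(word, freq)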

❽ How to extract all the consecutive words from a text with Python

Extracting keywords from text with Python is a common step in text analysis. In practice the amount of text is often large, and a single process is slow, so multiprocessing is worth considering.
Multiprocessing in Python only requires the multiprocessing module; when many processes are needed, use its process pool, Pool, and hand the work to the worker processes asynchronously (apply_async or map).

Test corpus: message.txt contains 581 lines of text, about 7 MB in total; 100 keywords are extracted per line.
The code is as follows:

# coding: utf-8
import sys
reload(sys)
sys.setdefaultencoding("utf-8")   # Python 2 only: force UTF-8 as the default encoding
from multiprocessing import Pool, Queue, Process
import multiprocessing as mp
import time, random
import os
import codecs
import jieba.analyse
jieba.analyse.set_stop_words("yy_stop_words.txt")

def extract_keyword(input_string):
    # print("Do task by process {proc}".format(proc=os.getpid()))
    tags = jieba.analyse.extract_tags(input_string, topK=100)
    # print("key words:{kw}".format(kw=" ".join(tags)))
    return tags

# def parallel_extract_keyword(input_string, out_file):
def parallel_extract_keyword(input_string):
    # print("Do task by process {proc}".format(proc=os.getpid()))
    tags = jieba.analyse.extract_tags(input_string, topK=100)
    # time.sleep(random.random())
    # print("key words:{kw}".format(kw=" ".join(tags)))
    # o_f = open(out_file, 'w')
    # o_f.write(" ".join(tags) + "\n")
    return tags

if __name__ == "__main__":

    data_file = sys.argv[1]
    with codecs.open(data_file) as f:
        lines = f.readlines()
    f.close()

    out_put = data_file.split('.')[0] + "_tags.txt"
    t0 = time.time()
    for line in lines:
        parallel_extract_keyword(line)
        # parallel_extract_keyword(line, out_put)
        # extract_keyword(line)
    print("Serial processing took {t}s".format(t=time.time() - t0))

    pool = Pool(processes=int(mp.cpu_count() * 0.7))
    t1 = time.time()
    # for line in lines:
    #     pool.apply_async(parallel_extract_keyword, (line, out_put))
    # pool.map collects the results, which makes it easy to write them to a file
    res = pool.map(parallel_extract_keyword, lines)
    # print("Print keywords:")
    # for tag in res:
    #     print(" ".join(tag))

    pool.close()
    pool.join()
    print("Parallel processing took {t}s".format(t=time.time() - t1))

To run:
python data_process_by_multiprocess.py message.txt
message.txt has one document per line, 581 lines, about 7 MB of data.

Running time:

Not suspending the worker processes with sleep, i.e. leaving time.sleep(random.random()) commented out, saves a lot of running time.

❾ How to get the frequency of a sound in Python (other languages are fine too)

First obtain the time-domain signal, then apply a Fourier transform to get the spectrum.
It sounds like you are most familiar with Python, so there is no need to switch languages; a quick web search will turn up Fourier transform libraries for Python.
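
A minimal sketch of that idea, assuming a WAV file named tone.wav (placeholder) and using NumPy/SciPy for the FFT; it reports the dominant frequency of the signal:

# Estimate the dominant frequency of a WAV file via FFT
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read('tone.wav')      # rate in Hz, samples as an integer array
if samples.ndim > 1:                          # stereo -> take one channel
    samples = samples[:, 0]

spectrum = np.abs(np.fft.rfft(samples))       # magnitude spectrum
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
peak = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
print("Dominant frequency: %.1f Hz" % peak)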

❿ How to do keyword extraction with Python

What exactly do the keywords look like?
Plain string matching may be enough;
for HTML use BeautifulSoup or regular expressions,
and for JSON it is even simpler.
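
If the goal is extracting keywords from free text rather than matching known strings, the jieba.analyse API already used in ❽ above is a common choice; a minimal sketch (the sample sentence is only a placeholder):

# TF-IDF based keyword extraction with jieba
import jieba.analyse

text = "Python 是一种广泛使用的高级编程语言。"        # replace with your own text
for word, weight in jieba.analyse.extract_tags(text, topK=5, withWeight=True):
    print(word, round(weight, 3))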
