So, I tried to count the number of words in a file in an unconventional way, using reduce, lambda and readlines:
import functools as ft
f=open("test_file.txt")
words=ft.reduce(lambda a,b:(len(a.split())+len(b.split())),f.readlines())
print(words)
An AttributeError is raised because I end up trying to split integers (indices). How do I get this code to split the elements of the iterable returned by f.readlines() and add up their lengths (i.e. the word counts of those lines) one after another, so that it ultimately gives the total number of words in the file?
If you're trying to get a count of words in a file, f.read() makes more sense than f.readlines() because it obviates the need to sum line-by-line counts. You get the whole file in a chunk and can then split on whitespace using split without arguments.
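For example, a minimal sketch assuming the same test_file.txt as in the question:

# Read the whole file in one chunk; split() with no arguments
# splits on any run of whitespace, including newlines.
with open("test_file.txt") as f:
    word_count = len(f.read().split())

print(word_count)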
If you really want to use readlines, it's easier to avoid functools.reduce in any event and sum the lengths of the split lines.
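Something along these lines, again assuming test_file.txt:

with open("test_file.txt") as f:
    # len(line.split()) is the word count of one line; sum() adds them up.
    word_count = sum(len(line.split()) for line in f.readlines())

print(word_count)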
It's good practice to use a with context manager so your resource is automatically closed. Use space around all operators so the code is readable.

As for getting functools.reduce to work: it accepts a lambda which is passed the accumulator as its first arg and the current element as the second. The second argument to functools.reduce is an iterable and the third initializes the accumulator. Leaving it blank as you've done sets it to the value of the first item in the iterable -- probably not what you want, since the idea is to perform a numerical summation using the accumulator. You can use
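a sketch along these lines, which starts the accumulator at 0 so only the current line (b) ever gets split:

import functools as ft

with open("test_file.txt") as f:
    # a is the running total (an int, seeded by the third argument, 0);
    # b is the current line, so only b needs .split().
    word_count = ft.reduce(lambda a, b: a + len(b.split()), f.readlines(), 0)

print(word_count)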
But this strikes me as a Rube Goldberg way of solving the problem.