How do I use a lambda to implement this logic?

So, I'm trying to count the words in a file in an unconventional way, using reduce, a lambda, and readlines:

import functools as ft
f=open("test_file.txt")
words=ft.reduce(lambda a,b:(len(a.split())+len(b.split())),f.readlines())
print(words)

This raises an AttributeError because I end up trying to call split on integers (the running counts). How can I get this code to split each element of the iterable returned by f.readlines() and add up their lengths (i.e., the number of words on each line) so that the result is the total word count of the file?
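To make the failure concrete, here is a minimal reproduction without a file (the lines list is a stand-in for what f.readlines() would return). After the first call the lambda returns an int, and reduce feeds that int back in as the first argument, so a.split() blows up on the second step:

```python
import functools as ft

lines = ["one two\n", "three four\n", "five six\n"]
try:
    # First call: a and b are both strings, result is 4 (an int).
    # Second call: a is now 4, and 4.split() raises AttributeError.
    ft.reduce(lambda a, b: len(a.split()) + len(b.split()), lines)
except AttributeError as e:
    print(e)  # 'int' object has no attribute 'split'
```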

Comments
  • pdolor
    pdolor replied

    If you're trying to get a count of words in a file, f.read() makes more sense than f.readlines() because it obviates the need to sum line-by-line counts. You get the whole file as a single string and can then split on whitespace by calling split with no arguments.

    >>> with open("foo.py") as f:
    ...     len(f.read().split())
    ...
    1530
    

    If you really want to use readlines, it's easier to avoid functools.reduce altogether and sum the lengths of the split lines:

    >>> with open("foo.py") as f:
    ...     sum(len(x.split()) for x in f.readlines())
    ...
    1530
    

    It's good practice to open files with a with context manager so the file is closed automatically. Also put spaces around operators so the code stays readable.

    As for getting functools.reduce to work: the function (your lambda) receives the accumulator as its first argument and the current element as the second. The second argument to functools.reduce is the iterable, and the optional third argument initializes the accumulator. Omitting it, as you've done, seeds the accumulator with the first item of the iterable--probably not what you want, since the idea is to keep a numeric running total in the accumulator.
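    A quick illustration of the initializer's effect, using a plain list of numbers rather than lines:

```python
import functools as ft

nums = [1, 2, 3, 4]

# No initializer: the first element (1) seeds the accumulator.
print(ft.reduce(lambda acc, x: acc + x, nums))       # 10

# Explicit initializer: the fold starts from 100 instead.
print(ft.reduce(lambda acc, x: acc + x, nums, 100))  # 110
```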

    You can use:

    >>> with open("foo.py") as f:
    ...     ft.reduce(lambda acc, line: len(line.split()) + acc, f.readlines(), 0)
    ...
    1530
    

    But this strikes me as a Rube Goldberg way of solving the problem.