How to get the name of input file in MRjob
20-06-2021
Question
I'm writing a map function using mrjob. My input will come from files in a directory on HDFS. The names of the files contain a small but crucial piece of information that is not present in the files themselves. Is there a way to learn (inside a map function) the name of the input file from which a given key-value pair comes?
I'm looking for an equivalent of this Java code:
FileSplit fileSplit = (FileSplit)reporter.getInputSplit();
String fileName = fileSplit.getPath().getName();
Thanks in advance!
Solution
The map.input.file property gives the input file name.
According to Hadoop: The Definitive Guide:
The properties can be accessed from the job's configuration, obtained in the old MapReduce API by providing an implementation of the configure() method for Mapper or Reducer, where the configuration is passed in as an argument. In the new API, these properties can be accessed from the context object passed to all methods of the Mapper or Reducer.
Other tips
If you are using Hadoop 2.x with Python:
import os
file_name = os.environ['mapreduce_map_input_file']
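Both answers rely on the same mechanism: Hadoop Streaming exposes job configuration properties to its tasks as environment variables, with dots replaced by underscores (map.input.file becomes map_input_file in Hadoop 1.x, and the property is renamed mapreduce.map.input.file in 2.x). A minimal sketch of a helper that covers both versions — the function name is my own, not part of mrjob — could look like this:

```python
import os

def get_input_file_name():
    """Return the current input file path inside a Hadoop Streaming task.

    Hadoop Streaming passes job configuration properties to tasks as
    environment variables, with '.' replaced by '_'. This tries the
    Hadoop 2.x property name first, then falls back to the 1.x name.
    Returns None when not running under Hadoop (e.g. in local tests).
    """
    return os.environ.get('mapreduce_map_input_file',
                          os.environ.get('map_input_file'))
```

You would call this helper at the top of your mrjob mapper and emit (or attach) the returned path alongside each key-value pair.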