Reputation: 31
I'm a beginner in Python and am working on a project to preprocess Japanese text data for argument mining. I need to extract metadata (e.g., parliamentary session, date, speaker) and the speech content from the text, then save it in a JSON file.
Each speech in my dataset typically begins with the symbol "○" followed by the speaker's name or position, and then the speech content.
It looks like this (I abridged the original text as it is too voluminous):
第120回国会 参議院 予算委員会公聴会 第1号 平成3年4月2日
平成三年四月二日(火曜日)
午前十時開会
─────────────
委員の異動
四月一日
辞任 補欠選任
合馬 敬君 平野 清君
─────────────
本日の会議に付した案件
○平成三年度一般会計予算(内閣提出、衆議院送付)
○平成三年度特別会計予算(内閣提出、衆議院送付)
○平成三年度政府関係機関予算(内閣提出、衆議院送付)
─────────────
○公述人(北岡伸一君) これはできるところからやっていくということであろうと思います。
○斎藤文夫君 もう時間がございませんので、両先生ありがとうございました。
○佐藤三吾君 早朝からお招きして、両公述人の先生には大変失礼なことをして、同僚としてもおわびを申し上げたいと思います。
The session number here is 120 and the chamber is 参議院.
Speakers here are:
○公述人(北岡伸一君)
○斎藤文夫君
○佐藤三吾君
The speeches follow the speakers' names.
I expect to receive something like this in the JSON file:
{
    "Date": "26 Mar 1992",
    "Session": "Session 123",
    "Chamber": "参議院",
    "Speaker": "公述人 (一河秀洋君)",
    "Content": "中央大学の一河でございます。 本日は、諸先生の前で平成四年度の予算について愚見を申し上げる機会をいただいたことを大変光栄に存じております。..."
}
I initially wrote a simple regex to handle this, and it worked well for some files, but not consistently across all of them. The current version of my code does not capture all of the speakers and speeches and sometimes includes procedural data (metadata).
By "procedural data," I mean text segments in the transcripts that are not meaningful speeches but rather administrative or procedural content. This could include things like session headers, attendee lists, time indications, or committee-related terms. For example, you might see something like this:
─────────────
本日の会議に付した案件
○平成三年度一般会計予算(内閣提出、衆議院送付)
○平成三年度特別会計予算(内閣提出、衆議院送付)
○平成三年度政府関係機関予算(内閣提出、衆議院送付)
The English version would be:
─────────────
Agenda Items for Today's Meeting
○ FY1991 General Account Budget (Submitted by the Cabinet, Sent by the House of Representatives)
○ FY1991 Special Account Budget (Submitted by the Cabinet, Sent by the House of Representatives)
○ FY1991 Government-Related Agency Budget (Submitted by the Cabinet, Sent by the House of Representatives)
Although the output matches the format described above, I receive only about 60 speeches instead of, say, 100. Sometimes I also get entries like:
{
    "Date": "26 Mar 1992",
    "Session": "Session 123",
    "Chamber": "参議院",
    "Speaker": "平成三年度特別会計予算(内閣提出、衆議院送付)",
    "Content": "付)"
}
I've identified a specific file that serves as a good benchmark for testing my code.
It is accessible via this link: https://kokkai.ndl.go.jp/#/detail?minId=112315262X00119920326&current=19 (I downloaded it and have used it as a text file.)
This is my code:
import os
import re
import json
from datetime import datetime
import argparse

def convert_japanese_date_to_english(date_str):
    # Convert date string from 'YYYYMMDD' to 'DD Mon YYYY'
    return datetime.strptime(date_str, '%Y%m%d').strftime('%d %b %Y')

def extract_session_and_chamber(first_line):
    # Extract session number and chamber (衆議院 or 参議院) from the first line of the transcript
    session_match = re.search(
        r'第(\d+)回国会\s*(衆議院|参議院).*?(?:第(\d+)号)',
        first_line
    )
    if session_match:
        session_number = session_match.group(1)
        chamber = session_match.group(2)
        return f'Session {session_number}', chamber
    return 'Unknown Session', 'Unknown Chamber'

def extract_speaker_and_content(segment):
    # Extract speaker name with role (including the case with ○)
    match = re.match(r'○([^（(\s]+)\s*(?:[（(](.*?)[）)])?\s*(.+)', segment, re.DOTALL)
    if match:
        speaker_name = match.group(1).strip()
        role = match.group(2).strip() if match.group(2) else ""
        speaker = f"{speaker_name} ({role})" if role else speaker_name
        content = match.group(3).strip()
        return speaker, content
    # Check for items starting with ○
    match_simple = re.match(r'○([^ ]+)\s*(.+)', segment, re.DOTALL)
    if match_simple:
        speaker = match_simple.group(1).strip()
        content = match_simple.group(2).strip()
        return speaker, content
    # If neither pattern matches, return None for both speaker and content
    return None, None

def is_metadata_or_procedural(segment):
    # Identify if the segment is metadata, such as session info, attendee lists, or procedural content
    metadata_patterns = [
        r'第\d+回国会',  # Session headers
        r'参議院|衆議院',  # Chamber names
        r'午前|午後',  # Time indications
        r'委員長|理事|委員|出席者|政府委員',  # Committee-related terms
        r'辞任|補欠選任',  # Procedural terms related to appointments
        r'議案|案件',  # Agenda items
        r'―――――――――――――',  # Procedural separators
    ]
    # Return True if the segment matches any metadata patterns
    for pattern in metadata_patterns:
        if re.search(pattern, segment):
            return True
    return False

def clean_content(content):
    # Clean the speech content by removing unnecessary symbols and normalizing text
    content = re.sub(r'―+.*?―+', '', content)  # Remove procedural markers
    content = re.sub(r'〔[^〕]*〕', '', content)  # Remove content in 〔〕 brackets
    content = re.sub(r'\(.*?\)', '', content)  # Remove content in round brackets
    content = re.sub(r'（[^）]*）', '', content)  # Remove content in Japanese-style parentheses
    content = re.sub(r'\s+', ' ', content)  # Normalize whitespace to single spaces
    # Trim leading and trailing whitespace
    return content.strip()

def process_speeches(file_path, output_dir):
    # Process a single speech transcript file and save the cleaned data to a JSON file
    file_name = os.path.basename(file_path).split('.')[0]
    date_part = re.match(r'\d{8}', file_name).group(0)
    date = convert_japanese_date_to_english(date_part)
    with open(file_path, 'r', encoding='utf-8') as file:
        lines = file.readlines()
    session_info, chamber = extract_session_and_chamber(lines[0])
    speech = ''.join(lines)
    segments = re.split(r'(?=○)', speech)
    processed_data = []
    for segment in segments:
        segment = segment.strip()
        if segment and not is_metadata_or_procedural(segment):
            speaker, content = extract_speaker_and_content(segment)
            if speaker and content:  # Only process if both speaker and content are present
                cleaned_content = clean_content(content)
                if cleaned_content:
                    entry = {
                        'Date': date,
                        'Session': session_info,
                        'Chamber': chamber,
                        'Speaker': speaker,
                        'Content': cleaned_content
                    }
                    processed_data.append(entry)
    output_file = os.path.join(output_dir, f'processed_{file_name}.json')
    with open(output_file, 'w', encoding='utf-8') as jsonfile:
        json.dump(processed_data, jsonfile, ensure_ascii=False, indent=4)
    return output_file

def process_all_speeches(input_dir, output_dir):
    # Process all speech transcript files in the input directory
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)
    preprocessed_files = [f for f in os.listdir(input_dir) if f.endswith('.txt')]
    processed_files = []
    for file_name in preprocessed_files:
        input_path = os.path.join(input_dir, file_name)
        output_file = process_speeches(input_path, output_dir)
        processed_files.append(output_file)
    return processed_files

def main():
    # Main entry point for the script
    parser = argparse.ArgumentParser(description='Process Japanese speech transcripts.')
    parser.add_argument('input_dir', help='Directory containing input .txt files')
    parser.add_argument('output_dir', help='Directory to save processed .json files')
    args = parser.parse_args()
    process_all_speeches(args.input_dir, args.output_dir)

if __name__ == '__main__':
    main()
Could someone help me with this code or suggest a better approach to reliably extract the speech content? Any guidance would be greatly appreciated!
Upvotes: 3
Views: 77
Reputation: 5308
General recommendation
The problem is that regexes by themselves cannot understand what distinguishes a speech from other kinds of data, so in the end you need to code some extra logic to tell the cases apart.
You are already doing this with the code you posted, but it seems you are getting some false negatives (missing speeches). The best recommendation I can give is to keep improving your code by finding a way to tell every case apart. If there is such a way, you need to find it; if there is none, false positives and negatives are inevitable from time to time.
I also recommend adding unit tests to your code. That way, whenever you find a false positive or negative, you can modify your code and add a new test, so you can be sure that future modifications will not break existing behavior.
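For example, with pytest, each case you discover becomes a pinned-down test. The extractor below is a simplified stand-in for your extract_speaker_and_content (an assumption for illustration, not your exact code):

```python
import re

def extract_speaker_and_content(segment):
    # Simplified extractor modeled on the question's function; accepts both
    # full-width （） and half-width () parentheses around the role/name.
    match = re.match(r'○([^（(\s]+)\s*(?:[（(](.*?)[）)])?\s*(.+)', segment, re.DOTALL)
    if not match:
        return None, None
    name, role, content = match.groups()
    speaker = f"{name} ({role})" if role else name
    return speaker, content.strip()

# Each test pins down one case; run with `pytest`, so a future regex change
# that breaks a case fails loudly instead of silently dropping speeches.
def test_speaker_with_role():
    speaker, content = extract_speaker_and_content(
        "○公述人（北岡伸一君） これはできるところからやっていくということであろうと思います。")
    assert speaker == "公述人 (北岡伸一君)"
    assert content.startswith("これは")

def test_plain_speaker():
    speaker, _ = extract_speaker_and_content(
        "○斎藤文夫君 もう時間がございませんので、両先生ありがとうございました。")
    assert speaker == "斎藤文夫君"

def test_agenda_item_still_matches():
    # Agenda lines also start with ○ — this documents the known false
    # positive that must be filtered out before extraction.
    speaker, _ = extract_speaker_and_content(
        "○平成三年度一般会計予算（内閣提出、衆議院送付）")
    assert speaker is not None
```

The last test deliberately records a failure mode rather than fixing it; once you add filtering, flip the assertion and the suite will document the new contract.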
Other ideas for your code
Now, looking specifically at the file you provided, there are some unexplored ideas you may want to try. Of course, their success will depend on the structure of your other files, since these recommendations are based only on the single file I could inspect.
All of them are based on the idea of splitting the text on the "─────" separator lines. By doing this, you will have a list of blocks.
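A sketch of that splitting step (the separators in the sample are runs of ─, U+2500; I also match the lookalike ― in case other files use it):

```python
import re

def split_into_blocks(text):
    # Split the transcript into blocks on the horizontal separator lines.
    # The sample file uses runs of ─ (U+2500); ― (U+2015) is accepted too.
    blocks = re.split(r'[─―]{3,}', text)
    # Drop empty pieces and trim surrounding whitespace
    return [b.strip() for b in blocks if b.strip()]
```

Each resulting block can then be classified independently as procedural data or speech.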
It seems that the first block containing ○ markers is always procedural data, and the following ones are speeches, at least in the file I inspected. So the first time you find ○ markers in a block, you know that block is procedural.
If that is not always true, you still have other options. Maybe the procedural data is always structured the same way:
Some Phrase...
○ a name (other data)
○ another name (other data)
...
If a block has that structure, it is procedural.
A regex to detect those cases could be something like the following (remember it should be applied to a single block, not the whole file):
\A\s+\w+\s+(?:○\w+[（(][^）)]+[）)]\s*)+
See: https://regex101.com/r/S3xb7t/1
If that block matches, it is procedural.
You may also get false positives, in which case it may help to inspect the phrase at the beginning (see the first capturing group) and check whether it contains expected words such as "items", "today", "agenda", or similar.
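Putting the last two ideas together, here is a hedged sketch of a block classifier (the function name and keyword list are mine, and the regex is adapted from the one above to accept both full- and half-width parentheses):

```python
import re

# A block counts as procedural when it is an intro phrase followed only by
# ○-items that end at a closing parenthesis (agenda/roster style), or when
# its first line contains a known procedural keyword. Both the shape and
# the keyword list are guesses from one file and may need tuning.
PROCEDURAL_SHAPE = re.compile(r'\A[^○]*(?:○[^（(\n]+[（(][^）)]*[）)]\s*)+\Z')
PROCEDURAL_KEYWORDS = ('案件', '委員の異動', '出席者')  # extend as you find more

def is_procedural_block(block):
    block = block.strip()
    if PROCEDURAL_SHAPE.match(block):
        return True
    # Keyword fallback: only the first line is checked, to reduce the risk
    # of a speech that merely mentions a keyword being misclassified.
    first_line = block.split('\n', 1)[0]
    return any(kw in first_line for kw in PROCEDURAL_KEYWORDS)
```

Speech blocks fail the shape test because real speech text continues after the closing parenthesis of the speaker's name, so the pattern cannot reach the end of the block.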
Upvotes: 0