I am a beginner with Elasticsearch and I have to write 1 million random events into an Elasticsearch cluster (hosted in the cloud) with a Python script. Here is how I connect to the cluster:
import random
import uuid
from datetime import timedelta

import certifi
import numpy as np
from elasticsearch import Elasticsearch

es = Elasticsearch(
    [host_name],
    port=9243,
    http_auth=("*****", "*******"),
    use_ssl=True,
    verify_certs=True,
    ca_certs=certifi.where(),
    sniff_on_start=True
)
Here's my code for the indexing:
src_centers = ['data center a', 'data center b', 'data center c',
               'data center d', 'data center e']

for i in range(1000000):
    transfer_src = np.random.choice(src_centers, p=[0.3, 0.175, 0.175, 0.175, 0.175])
    dst_centers = [x for x in src_centers if x != transfer_src]
    transfer_dst = np.random.choice(dst_centers)
    transfer_starttime = generate_timestamp()  # my own helper
    file_size = random.randrange(1024, 10000000000)
    ftp = {
        'event_type': 'transfer-queued',
        'uuid': uuid.uuid4(),
        'src_site': transfer_src,
        'dst_site': transfer_dst,
        'timestamp': transfer_starttime,
        'bytes': file_size
    }
    print(i)
    es.index(index='ft_initial', id=(i + 1), doc_type='initial_transfer_details', body=ftp)

    transfer_status = ['transfer-success', 'transfer-failure']
    final_status = np.random.choice(transfer_status, p=[0.95, 0.05])
    ftp['event_type'] = final_status
    if final_status == 'transfer-failure':
        time_delay = 10
    else:
        time_delay = int(transfer_time(file_size))  # my own helper; ranges roughly from 0-10000 s
    ftp['timestamp'] = transfer_starttime + timedelta(seconds=time_delay)
    es.index(index='ft_final', id=(i + 1), doc_type='final_transfer_details', body=ftp)
Is there an alternative way to speed up the process? Any help/pointers would be appreciated. Thanks.
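One thing I found in the docs is elasticsearch.helpers.bulk, which sends documents in batches rather than issuing one HTTP request per document, but I'm not sure how to apply it here. Below is a rough, untested sketch of what I think it would look like for the first index; chunk_size=5000 is just a guess, generate_timestamp is my helper from above, and the ft_final event could presumably be yielded the same way with its own _index:

from elasticsearch import helpers

def generate_actions(n=1000000):
    # Each yielded dict is one bulk action; helpers.bulk groups them
    # into chunks and sends each chunk as a single request.
    src_centers = ['data center a', 'data center b', 'data center c',
                   'data center d', 'data center e']
    for i in range(n):
        transfer_src = np.random.choice(src_centers, p=[0.3, 0.175, 0.175, 0.175, 0.175])
        transfer_dst = np.random.choice([x for x in src_centers if x != transfer_src])
        yield {
            '_index': 'ft_initial',
            '_type': 'initial_transfer_details',
            '_id': i + 1,
            '_source': {
                'event_type': 'transfer-queued',
                'uuid': str(uuid.uuid4()),
                'src_site': transfer_src,
                'dst_site': transfer_dst,
                'timestamp': generate_timestamp(),  # my helper from above
                'bytes': random.randrange(1024, 10000000000)
            }
        }

helpers.bulk(es, generate_actions(), chunk_size=5000, request_timeout=120)

Would that be the right approach, or is there something better (e.g. temporarily relaxing the index refresh_interval while loading)?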