Reputation: 13
I have a requirement to update existing records with new values (or insert new records) by converting the data to JSON in the Oracle backend (PL/SQL). The JSON values then need to be posted to a URL. I am able to post sample values to the URL from Oracle, but before updating/inserting any new values, the trigger throws a mutating table error when I try to read the table. Can you please post some sample code for this?
Thanks in advance
CREATE OR REPLACE TRIGGER get_employees_trig
  BEFORE INSERT ON emp_table
  REFERENCING OLD AS OLD NEW AS NEW
  FOR EACH ROW
DECLARE
  v_ret VARCHAR2(100);
BEGIN
  dbms_output.put_line('fired-0');
  IF :NEW.id <> :OLD.id THEN
    v_ret := post_json_data_fnc(:NEW.id);
    dbms_output.put_line('fired-1');
  END IF;
  dbms_output.put_line('fired-2');
END;
CREATE OR REPLACE FUNCTION post_json_data_fnc(
  p_id IN NUMBER)
  RETURN CLOB
IS
  req    utl_http.req;
  res    utl_http.resp;
  url    VARCHAR2(200);
  l_clob CLOB;
  l_xml  CLOB;
  l_txt  CLOB;
  --content varchar2(4000) := '{"name":"u14", "pass": "123","mail": "[email protected]"}';
BEGIN
  url := 'http://10.54.8.210:9200/temp/20';
  l_xml := json_util_pkg.ref_cursor_to_json(emp_spec.get_employees(p_id));
  req := utl_http.begin_request(url, 'POST', 'HTTP/1.1');
  utl_http.set_header(req, 'user-agent', 'mozilla/4.0');
  utl_http.set_header(req, 'content-type', 'application/json');
  utl_http.set_header(req, 'Content-Length', LENGTH(l_xml));
  l_txt := l_xml;
  utl_http.write_text(req, l_txt);
  res := utl_http.get_response(req);
  utl_http.read_text(res, l_txt);
  utl_http.end_response(res);
  dbms_output.put_line(l_txt);
  RETURN l_txt;
EXCEPTION
  WHEN utl_http.end_of_body THEN
    utl_http.end_response(res);
END;
Upvotes: 0
Views: 2044
Reputation: 231881
First, the mutating table exception is being thrown because you cannot in general query the table on which a row-level trigger is defined (emp_table in this case) from within the trigger or within code called by that trigger. My assumption is that something in the json_util_pkg.ref_cursor_to_json(emp_spec.get_employees(p_id)) call is querying emp_table. If you really, really wanted to do this from within a row-level trigger, you would need to remove any code that queries emp_table and recreate it using just the :new and :old pseudorecords.
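A minimal sketch of that approach, assuming hypothetical emp_table columns name and mail alongside id: the trigger hands the :new column values straight to the posting code, so nothing ever has to query emp_table. (Naive string concatenation does not escape JSON special characters; on 12c+ you could build the payload with JSON_OBJECT instead.)

```sql
-- Sketch only: assumes a variant of post_json_data_fnc that accepts a
-- ready-made JSON payload instead of re-querying emp_table by id.
CREATE OR REPLACE TRIGGER get_employees_trig
  BEFORE INSERT ON emp_table
  FOR EACH ROW
DECLARE
  v_ret CLOB;
BEGIN
  -- Build the payload from the :new pseudorecord alone; no query against
  -- emp_table, so no mutating table error.
  v_ret := post_json_payload_fnc(
             '{"id": '     || :NEW.id
          || ', "name": "' || :NEW.name
          || '", "mail": "' || :NEW.mail || '"}');
END;
/
```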
However, you almost certainly don't want to do this, or anything non-transactional, in a trigger. What happens, for example, if your transaction is rolled back? Do you really want to have passed values to the web service that never actually existed in the table? What happens if Oracle does a partial or complete rollback and then re-executes the transaction for write consistency? Is your web service going to have a problem if you pass in duplicate data? In addition, do you really want to tie the availability of your system to the availability of the web service? If the web server is down for a couple of minutes, should transactions on emp_table fail?
It would make drastically more sense architecturally to build an asynchronous process. In the simplest case, you'd insert the primary key from emp_table into a separate table like emp_rows_to_be_processed. A background job (presumably using dbms_scheduler) would then periodically read that table, assemble the JSON, call the service, and then delete the row from emp_rows_to_be_processed. That allows the database to continue functioning if the web service is down briefly. The background job will only see rows that were actually committed. It won't see duplicates. And it doesn't have any problems with mutating tables.
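A rough sketch of that queue-and-poll design (the queue table, procedure, and job names are illustrative; post_json_data_fnc is your existing function):

```sql
-- Queue table: the trigger only records the key, which is always legal
-- inside a row-level trigger.
CREATE TABLE emp_rows_to_be_processed (
  id        NUMBER PRIMARY KEY,
  queued_at DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER get_employees_trig
  BEFORE INSERT ON emp_table
  FOR EACH ROW
BEGIN
  INSERT INTO emp_rows_to_be_processed (id) VALUES (:NEW.id);
END;
/

-- Worker: runs outside the triggering transaction, so it sees only
-- committed rows and may safely query emp_table.
CREATE OR REPLACE PROCEDURE process_queued_emps IS
BEGIN
  FOR r IN (SELECT id FROM emp_rows_to_be_processed) LOOP
    -- post_json_data_fnc assembles the JSON and calls the web service.
    IF post_json_data_fnc(r.id) IS NOT NULL THEN
      DELETE FROM emp_rows_to_be_processed WHERE id = r.id;
    END IF;
  END LOOP;
  COMMIT;
END;
/

-- Poll the queue once a minute.
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'POST_EMP_JSON_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PROCESS_QUEUED_EMPS',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
END;
/
```

Rows that fail to post simply stay in the queue and are retried on the next run, which is usually the behavior you want when the web service is briefly unavailable.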
Upvotes: 1