We currently use a single search head with an indexer cluster. I have written a large number of relatively complex dashboards that use loadjob. I recently learned from the documentation page for the loadjob command that, on a search head cluster, loadjob cannot be run against ad hoc searches.
I have a few questions:
1) Do searches written as part of dashboard XML count as ad hoc searches? Will I need to rewrite or change these when we move to a search head cluster?
2) Are there any workarounds I can use? I am concerned about dashboard efficiency without loadjob, because in many cases I use the same search multiple times (in some dashboards I reuse the same results more than 5 times). I don't want to run the search over and over in the dashboard, and I don't want to schedule searches for every dashboard. Much of the value to our business comes from how up-to-date our data is, so I would need a lot of very frequent scheduled searches, which does not seem ideal to me, although it is an option if I can keep each search relatively performant.
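For reference, my understanding is that the usual loadjob-free way to reuse one search's results inside a single dashboard is a base search with post-process searches (`<search id="...">` plus `<search base="...">`), which I believe does work on a search head cluster. A hypothetical sketch, using made-up field names:

```xml
<dashboard>
  <row>
    <panel>
      <!-- Base search; the id lets post-process searches reference its job -->
      <search id="my_base">
        <query>index=foo source=source1 field1=bar | fields field1 field2</query>
        <earliest>-21d@d</earliest>
        <latest>now</latest>
      </search>
      <table>
        <!-- Post-process search: reuses the base job's results without re-running it -->
        <search base="my_base">
          <query>stats values(field1) as field1</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

My concern with this pattern is that, unlike loadjob, a post-process search cannot be dropped into a subsearch of a different base search, which is how I use loadjob below.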
Here is a generic example of how I am using the loadjob command; everything shown would be in the context of a single dashboard. Below it I have pasted a real example of a dashboard that I have created.
index=foo source=source1 field1=bar
| non-relevant SPL here
index=foo source=source2 [| loadjob $search1$ | stats values(field1) as field1]
| non-relevant SPL here
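For context on where `$search1$` comes from: in the dashboard XML I capture the first search's job SID into a token when it finishes, roughly like this (a simplified sketch; panel layout omitted):

```xml
<search id="first_search">
  <query>index=foo source=source1 field1=bar | non-relevant SPL here</query>
  <done>
    <!-- Save the finished job's SID so other panels can loadjob it -->
    <set token="search1">$job.sid$</set>
  </done>
</search>
```

The second search then runs `| loadjob $search1$` inside its subsearch, as shown above.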
Here is an example of a real dashboard I have written, in case it is helpful, with a few things redacted as needed. (I went a little overboard with the redactions in some places, apologies, but the point of my question is the loadjob functionality, which is all still visible.)
index=def_mfg source=line_index production_area=--REDACTED-- earliest=-21d@d
| sort 0 -_time
| delta _time as timespan
| search event="running"
| eval timespan=-timespan
| transaction maxpause= --REDACTED--
| stats sum(timespan) as timespan values(source) as source by _time
| where timespan>= --REDACTED--
| sort 0 -_time
| streamstats count as index_number
| fields - timespan
index=def_mfg source=punch (employee_kronos_group=--REDACTED-- OR employee_kronos_group=--REDACTED--) earliest=-28d@d
| regex work_center="^(--REDACTED--)$"
| sort 0 -_time trigger_timing
| dedup record_trace_key
| where trigger_timing="after"
| table start source pwo work_center cwo
| rename start as _time
| append
[| loadjob $indexes_base$]
| append
[| loadjob $indexes_base$
| fields _time source
| eval _time=_time - --REDACTED--,
source="txn_start"]
| append
[| loadjob $indexes_base$
| fields _time source
| eval _time=_time - --REDACTED--,
source="txn_end"
| append
[| makeresults
| eval source="txn_end"]]
| eval pwo_work_center=pwo.",:,".cwo.",:,".work_center
| sort 0 -_time
| transaction startswith=eval(source="txn_start") endswith=eval(source="txn_end")
| fields - eventcount linecount joiner field_match_sum closed_txn timespan duration pwo work_center _raw source cwo
| mvexpand pwo_work_center
| rex field=pwo_work_center "(?<pwo>.*),:,(?<cwo>.*),:,S(?<work_center>.*)"
| eval index_number=index_number - (--REDACTED-- - work_center)
| append
[| loadjob $indexes_base$
| fields - source
| rename _time as offline_time]
| stats mode(pwo) as pwo mode(cwo) as cwo values(offline_time) as offline_time by index_number
| where index_number>0 AND NOT (isnull(pwo) OR isnull(cwo) OR pwo="" OR cwo="")
index=def_mfg source=work_order (tag=--REDACTED-- OR tag=--REDACTED--) earliest=-50d@d
[| loadjob $index_base$
| stats values(pwo) as pwo]
| table _time cwo pwo status part_number description tag start
| sort 0 -_time
| dedup cwo
| stats values(eval(if(tag="--REDACTED--",status,null()))) as capsule_status values(eval(if(tag="--REDACTED--",status,null()))) as main_line_status values(eval(if(tag="--REDACTED--",start,null()))) as start by pwo
index=def_mfg source=punch (--REDACTED FILTER HERE--) earliest=-21d@d
[| loadjob $index_base$
| stats values(pwo) as pwo]
| regex work_center="^(--REDACTED--)$"
| sort 0 -_time trigger_timing
| dedup record_trace_key
| where trigger_timing="after"
| stats avg(start) as _time by pwo work_center
| sort 0 -_time
| dedup pwo
index=def_mfg (source=nonconformance OR source=incomplete_task OR source=part_not_installed) earliest=-35d@d
([| loadjob $index_base$
| stats values(pwo) as pwo])
| regex open_work_center="^(--REDACTED--)$"
| sort 0 -_time
| dedup issue_id
| search status="OPEN"
| stats count(eval(source="nonconformance")) as nonconformances count(eval(source="incomplete_task")) as incomplete_tasks count(eval(source="part_not_installed" AND reason="SHORT")) as shorts list(eval(if(source="part_not_installed" AND reason="SHORT",part_number,null()))) as shorts_list count(eval(source="part_not_installed" AND NOT reason="SHORT")) as parts_not_installed by pwo
| loadjob $index_base$
| append
[| loadjob $status_base$]
| append
[| loadjob $punch_base$]
| append
[| loadjob $defect_base$]
| stats values(start) as ml_start values(capsule_status) as capsule_status values(offline_time) as capsule_offline values(main_line_status) as main_line_status values(work_center) as last_punch_work_center values(_time) as last_punch_time values(nonconformances) as nonconformances values(incomplete_tasks) as incomplete_tasks values(shorts) as shorts values(parts_not_installed) as parts_not_installed list(shorts_list) as shorts_list by pwo
| where NOT (main_line_status >--REDACTED-- OR last_punch_time
Completed --REDACTED-- in Queue for --REDACTED--
| loadjob $panel_base$
| search (NOT last_punch_work_center=*) OR last_punch_work_center="" OR _work_center<--REDACTED-- OR (NOT main_line_status>=--REDACTED--)
| stats count
Open DPU on Queue --REDACTED--
| loadjob $panel_base$
| search (NOT last_punch_work_center=*) OR last_punch_work_center="" OR _work_center<--REDACTED-- OR (NOT main_line_status>=--REDACTED--)
| stats sum(nonconformances) as nonconformances sum(incomplete_tasks) as incomplete_tasks count as capsules
| eval dpu=(nonconformances+incomplete_tasks)/capsules
| fields dpu
--REDACTED-- Dropped on --REDACTED-- - Currently on --REDACTED--
| loadjob $panel_base$
| search _work_center>=--REDACTED--
| stats count
Open --REDACTED--
| loadjob $panel_base$
| search _work_center>=--REDACTED--
| stats sum(nonconformances) as nonconformances sum(incomplete_tasks) as incomplete_tasks count as capsules
| eval dpu=(nonconformances+incomplete_tasks)/capsules
| fields dpu