-
Global information
- Generated on Wed Jan 7 15:00:41 2026
- Log file: /home/postgres/pg_data/data/pg_log/postgresql-2026-01-07_160000.log, ..., /home/postgres/pg_data/data/pg_log/postgresql-2026-01-07_164004.log
- Parsed 4,720,974 log entries in 1m40s
- Log period: 2026-01-07 16:00:00 to 2026-01-07 17:00:00
-
Overview
Global Stats
- 302 Number of unique normalized queries
- 466,655 Number of queries
- 1h33m47s Total query duration
- 2026-01-07 16:00:00 First query
- 2026-01-07 17:00:00 Last query
- 5,979 queries/s at 2026-01-07 16:15:04 Query peak
- 22s816ms Prepare/parse total duration
- 1m40s Bind total duration
- 1h31m44s Execute total duration
- 1 Number of events
- 1 Number of unique normalized events
- 1 Max number of times the same event was reported
- 0 Number of cancellations
- 39 Total number of automatic vacuums
- 57 Total number of automatic analyzes
- 636 Number of temporary files
- 188.27 MiB Max size of temporary file
- 7.64 MiB Average size of temporary file
- 9,423 Total number of sessions
- 11 sessions at 2026-01-07 16:55:09 Session peak
- 2d10h42m38s Total duration of sessions
- 22s430ms Average duration of sessions
- 49 Average queries per session
- 597ms Average query duration per session
- 21s832ms Average idle time per session
- 9,427 Total number of connections
- 73 connections/s at 2026-01-07 16:38:40 Connection peak
- 4 Total number of databases
SQL Traffic
Key values
- 5,979 queries/s Query Peak
- 2026-01-07 16:15:04 Date
SELECT Traffic
Key values
- 2,918 queries/s Query Peak
- 2026-01-07 16:15:04 Date
INSERT/UPDATE/DELETE Traffic
Key values
- 209 queries/s Query Peak
- 2026-01-07 16:00:58 Date
Queries duration
Key values
- 1h33m47s Total query duration
Prepared queries ratio
Key values
- 0.00 Ratio of bind vs prepare
- 0.00 % Ratio between prepared and "usual" statements
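These ratios count extended-protocol prepare and bind events from the log. As an illustration only (the report counts protocol-level messages, not SQL-level PREPARE), the same idea can be sketched at the SQL level, reusing the mt4datafeederrors table that appears later in this report:

```sql
-- Illustration: one parse (PREPARE), many binds/executions (EXECUTE).
-- A bind/prepare ratio well above 1 means prepared statements are being reused.
PREPARE feed_errors (text) AS
    SELECT count(*) FROM mt4datafeederrors WHERE status = $1;

EXECUTE feed_errors('OK');      -- 1st bind/execute
EXECUTE feed_errors('ERROR');   -- 2nd bind/execute: 2 binds per 1 prepare so far

DEALLOCATE feed_errors;
```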
General Activity
Queries
Day     Hour  Count    Min duration  Max duration  Avg duration  Latency Percentile(90)  Latency Percentile(95)  Latency Percentile(99)
Jan 07  16    466,653  0ms           20s821ms      11ms          3m57s                   4m8s                    4m22s
Jan 07  17    2        0ms           0ms           0ms           0ms                     0ms                     0ms

SELECT traffic
Day     Hour  SELECT   COPY TO  Average Duration  Latency Percentile(90)  Latency Percentile(95)  Latency Percentile(99)
Jan 07  16    162,492  26       0ms               0ms                     0ms                     0ms
Jan 07  17    1        0        0ms               0ms                     0ms                     0ms

INSERT/UPDATE/DELETE traffic
Day     Hour  INSERT  UPDATE  DELETE  COPY FROM  Average Duration  Latency Percentile(90)  Latency Percentile(95)  Latency Percentile(99)
Jan 07  16    36,137  3,864   16      96         0ms               0ms                     0ms                     0ms
Jan 07  17    0       0       0       0          0ms               0ms                     0ms                     0ms

Prepared queries
Day     Hour  Prepare  Bind     Bind/Prepare  Percentage of prepare
Jan 07  16    59,909   196,250  3.28          27.74%
Jan 07  17    0        1        1.00          0.00%

Connections
Day     Hour  Count  Average / Second
Jan 07  16    9,427  2.62/s
Jan 07  17    0      0.00/s

Sessions
Day     Hour  Count  Average Duration  Average idle time
Jan 07  16    9,423  22s430ms          21s845ms
Jan 07  17    0      0ms               0ms
-
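The hourly totals above are aggregated from the logs. When the server is reachable, a similar per-query breakdown can be pulled live from pg_stat_statements — a sketch, assuming the extension is installed and PostgreSQL 13+ column names:

```sql
-- Top queries by cumulative execution time, as tracked by pg_stat_statements.
-- On PostgreSQL 12 and older the columns are total_time / mean_time instead.
SELECT queryid,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 3)  AS mean_ms,
       left(query, 80)                    AS query_sample
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```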
Connections
Established Connections
Key values
- 73 connections Connection Peak
- 2026-01-07 16:38:40 Date
Connections per database
Key values
- acaweb_fx Main Database
- 9,427 connections Total
Connections per user
Key values
- postgres Main User
- 9,427 connections Total
Connections per host
Key values
- 192.168.0.74 Main host with 4130 connections
- 9,427 Total connections
Host            Count
104.30.164.187  9
127.0.0.1       115
192.168.0.114   12
192.168.0.216   101
192.168.0.74    4,130
192.168.1.127   6
192.168.1.131   4
192.168.1.145   147
192.168.1.15    2,440
192.168.1.20    203
192.168.1.239   2
192.168.1.90    76
192.168.2.126   62
192.168.2.182   24
192.168.2.82    48
192.168.3.199   36
192.168.4.142   1,286
192.168.4.150   10
192.168.4.238   16
192.168.4.252   1
192.168.4.33    91
192.168.4.46    4
192.168.4.98    330
[local]         274
-
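The per-host totals above are cumulative over the log window. A live snapshot of currently open client connections per host can be taken from pg_stat_activity (a sketch; it only sees connections open right now, so the numbers will be far smaller than the log-derived totals):

```sql
-- Currently open client connections per host. A NULL client_addr means a
-- Unix-domain socket connection, shown here as [local] to match the report.
SELECT coalesce(host(client_addr), '[local]') AS host,
       count(*)                               AS connections
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY 1
ORDER BY connections DESC;
```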
Sessions
Simultaneous sessions
Key values
- 11 sessions Session Peak
- 2026-01-07 16:55:09 Date
Histogram of session times
Key values
- 8,086 0-500ms duration
Sessions per database
Key values
- acaweb_fx Main Database
- 9,423 sessions Total
Sessions per user
Key values
- postgres Main User
- 9,423 sessions Total
Sessions per host
Key values
- 192.168.0.74 Main Host
- 9,423 sessions Total
Host            Count  Total Duration  Average Duration
104.30.164.187  4      9s163ms         2s290ms
127.0.0.1       115    10s642ms        92ms
192.168.0.114   13     1h33m53s        7m13s
192.168.0.216   101    1m2s            620ms
192.168.0.74    4,130  6h58m3s         6s73ms
192.168.1.127   6      7s168ms         1s194ms
192.168.1.131   4      7h29m23s        1h52m20s
192.168.1.145   147    4h43m11s        1m55s
192.168.1.15    2,440  2h28m20s        3s647ms
192.168.1.20    203    14h48m32s       4m22s
192.168.1.239   2      12ms            6ms
192.168.1.90    76     34s905ms        459ms
192.168.2.126   62     6s587ms         106ms
192.168.2.182   24     5s415ms         225ms
192.168.2.82    48     31s56ms         647ms
192.168.3.199   36     1s310ms         36ms
192.168.4.142   1,286  17m37s          822ms
192.168.4.150   10     20h10m56s       2h1m5s
192.168.4.238   16     21s             1s312ms
192.168.4.252   1      205ms           205ms
192.168.4.33    91     5m42s           3s758ms
192.168.4.46    4      20s541ms        5s135ms
192.168.4.98    330    14s631ms        44ms
[local]         274    3m11s           698ms
-
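A live analogue of the session-duration columns can be derived from backend_start in pg_stat_activity (a sketch; again limited to sessions that are open right now):

```sql
-- Age of currently open client sessions, grouped per host.
SELECT coalesce(host(client_addr), '[local]')           AS host,
       count(*)                                         AS sessions,
       date_trunc('second', avg(now() - backend_start)) AS avg_session_age,
       date_trunc('second', max(now() - backend_start)) AS longest_session_age
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY 1
ORDER BY sessions DESC;
```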
Checkpoints / Restartpoints
Checkpoints Buffers
Key values
- 18,217 buffers Checkpoint Peak
- 2026-01-07 16:09:07 Date
- 209.910 seconds Highest write time
- 0.019 seconds Sync time
Checkpoints WAL files
Key values
- 9 files WAL files usage Peak
- 2026-01-07 16:09:07 Date
Checkpoints distance
Key values
- 263.81 MB Distance Peak
- 2026-01-07 16:09:07 Date
Checkpoints Activity
Checkpoint buffers
Day     Hour  Written buffers  Write time  Sync time  Total time
Jan 07  16    65,987           2,021.036s  0.064s     2,021.409s
Jan 07  17    0                0s          0s         0s

WAL files
Day     Hour  Added  Removed  Recycled  Synced files  Longest sync  Average sync
Jan 07  16    0      0        30        2,167         0.004s        0s
Jan 07  17    0      0        0         0             0s            0s

Checkpoint warnings
Day     Hour  Count  Avg time (sec)
Jan 07  16    0      0s
Jan 07  17    0      0s

Checkpoint distance
Day     Hour  Mean distance  Mean estimate
Jan 07  16    40,398.17 kB   92,933.33 kB
Jan 07  17    0.00 kB        0.00 kB
-
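The per-hour write/sync times above come from log_checkpoints output; the server also keeps matching cumulative counters. A sketch for PostgreSQL 16 and earlier (on 17+ most of these counters moved to the pg_stat_checkpointer view):

```sql
-- Cumulative checkpoint statistics since the last stats reset.
SELECT checkpoints_timed,      -- scheduled by checkpoint_timeout
       checkpoints_req,        -- requested (max_wal_size, manual CHECKPOINT, ...)
       checkpoint_write_time,  -- total ms spent writing buffers
       checkpoint_sync_time,   -- total ms spent in fsync
       buffers_checkpoint,     -- buffers written by checkpoints
       stats_reset
FROM pg_stat_bgwriter;
```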
Temporary Files
Size of temporary files
Key values
- 184.03 MiB Temp Files size Peak
- 2026-01-07 16:20:07 Date
Number of temporary files
Key values
- 30 per second Temp Files Peak
- 2026-01-07 16:47:09 Date
Temporary Files Activity
Day     Hour  Count  Total size  Average size
Jan 07  16    636    4.74 GiB    7.64 MiB
Jan 07  17    0      0           0

Queries generating the most temporary files (N)
Rank 1 — Count: 33, Total size: 1.65 GiB, Min size: 3.48 MiB, Max size: 188.27 MiB, Avg size: 51.26 MiB
with rankedmt4 as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors ), last_feed_entry as ( select * from rankedmt4 where r = ? ), ok_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where status = ? ), earliest_entry_after_ok as ( select m.datafeedname, min(m.eventtimestamp) as eventtimestamp from mt4datafeederrors m left outer join ( select datafeedname, eventtimestamp from ok_entries where r = ?) oo on m.datafeedname = oo.datafeedname where m.eventtimestamp > coalesce(oo.eventtimestamp, ?::timestamp without time zone) group by m.datafeedname ), notified_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where notified is not null and notified <> ? ), broker as ( select *, row_number() over (partition by feedname order by brokerid) r from ( select distinct b.brokerid, b.name as brokername, dss.classname as feedname from downloadersymbolsettings dss inner join brokersymbollist bsl on dss.symbolid = bsl.symbolid inner join broker b on bsl.brokerid = b.brokerid where dss.enabled = ?) a ) select last.id, last.datafeedname, last.eventtimestamp, last.status, last.errordescription, last.serveraddress, last.username, note.notified, note.eventtimestamp, broker.brokername from last_feed_entry last inner join earliest_entry_after_ok after_ok on last.datafeedname = after_ok.datafeedname inner join broker on last.datafeedname = broker.feedname left outer join ok_entries ok on ok.datafeedname = last.datafeedname left outer join notified_entries note on note.datafeedname = last.datafeedname and note.r = ? where (ok.r is null or ok.r = ?) and last.datafeedname not in ( select distinct datafeedname from last_feed_entry where status = ?) and extract(epoch from (last.eventtimestamp - after_ok.eventtimestamp)) > ? * ? and last.eventtimestamp > current_timestamp - interval ? and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval ?) and last.eventtimestamp > current_timestamp - interval ? and broker.r = ?;
-
with rankedmt4 as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors ), last_feed_entry as ( select * from rankedmt4 where r = 1 ), ok_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where status = 'OK' ), earliest_entry_after_ok as ( select m.datafeedname, min(m.eventtimestamp) as eventtimestamp from mt4datafeederrors m left outer join ( select datafeedname, eventtimestamp from ok_entries where r = 1) oo on m.datafeedname = oo.datafeedname where m.eventtimestamp > coalesce(oo.eventtimestamp, '1900-01-01'::timestamp without time zone) group by m.datafeedname ), notified_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where notified is not null and notified <> '' ), broker as ( select *, row_number() over (partition by feedname order by brokerid) r from ( select distinct b.brokerid, b.name as brokername, dss.classname as feedname from downloadersymbolsettings dss inner join brokersymbollist bsl on dss.symbolid = bsl.symbolid inner join broker b on bsl.brokerid = b.brokerid where dss.enabled = 1) a ) select last.id, last.datafeedname, last.eventtimestamp, last.status, last.errordescription, last.serveraddress, last.username, note.notified, note.eventtimestamp, broker.brokername from last_feed_entry last inner join earliest_entry_after_ok after_ok on last.datafeedname = after_ok.datafeedname inner join broker on last.datafeedname = broker.feedname left outer join ok_entries ok on ok.datafeedname = last.datafeedname left outer join notified_entries note on note.datafeedname = last.datafeedname and note.r = 1 where (ok.r is null or ok.r = 1) and last.datafeedname not in ( select distinct datafeedname from last_feed_entry where status = 'OK') and extract(epoch from (last.eventtimestamp - after_ok.eventtimestamp)) > 60 * 60 and last.eventtimestamp > 
current_timestamp - interval '1 day' and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval '10 hours') and last.eventtimestamp > current_timestamp - interval '1 hour' and broker.r = 1;
Date: 2026-01-07 16:50:05 Duration: 0ms
Rank 2 — Count: 16, Total size: 617.38 MiB, Min size: 38.59 MiB, Max size: 38.59 MiB, Avg size: 38.59 MiB
update solr_relevance_old set new_hod_correct = sub.hod_correct, new_hod_percent = sub.hod_percent, new_hod_total = sub.hod_total, new_pattern_correct = sub.pattern_correct, new_pattern_percent = sub.pattern_percent, new_pattern_total = sub.pattern_total, new_percent = sub.percent, new_symbol_correct = sub.symbol_correct, new_symbol_percent = sub.symbol_percent, new_symbol_total = sub.symbol_total from ( select distinct resultuid, hod_correct, hod_percent, hod_total, hod, pattern_correct, pattern_percent, pattern_total, percent, symbol_correct, symbol_percent, symbol_total from whatshot_probability where type = ?) sub where result_uid = sub.resultuid;
-
UPDATE solr_relevance_old SET new_hod_correct = sub.hod_correct, new_hod_percent = sub.hod_percent, new_hod_total = sub.hod_total, new_pattern_correct = sub.pattern_correct, new_pattern_percent = sub.pattern_percent, new_pattern_total = sub.pattern_total, new_percent = sub.percent, new_symbol_correct = sub.symbol_correct, new_symbol_percent = sub.symbol_percent, new_symbol_total = sub.symbol_total FROM ( select distinct resultuid, hod_correct, hod_percent, hod_total, hod, pattern_correct, pattern_percent, pattern_total, percent, symbol_correct, symbol_percent, symbol_total FROM whatshot_probability WHERE type = 'cp') sub WHERE result_uid = sub.resultuid;
Date: 2026-01-07 16:41:14 Duration: 0ms
Rank 3 — Count: 16, Total size: 1.11 GiB, Min size: 70.83 MiB, Max size: 70.84 MiB, Avg size: 70.83 MiB
with max_ra as ( select resultuid from relevance_keylevels_results order by resultuid desc limit ?) update solr_relevance_old set newrelevant = sub.relevant, newage = sub.age from ( select so.uuid, case when ra.relevant is not null then ra.relevant when so.result_uid < max_ra.resultuid then ? else ? end as relevant, case when ra.age is not null then ra.age when so.result_uid < max_ra.resultuid then ? else ? end as age, so.result_uid from max_ra, solr_relevance_old so inner join keylevels_results k on so.result_uid = k.resultuid and so.uuid ilike ? inner join downloadersymbolsettings dss on k.symbolid = dss.symbolid left outer join relevance_keylevels_results ra on so.result_uid = ra.resultuid and so.uuid ilike ?) sub where solr_relevance_old.result_uid = sub.result_uid and solr_relevance_old.uuid ilike ?; update solr_relevance_old set newrelevant = ? where result_uid in ( select result_uid from solr_relevance_old s left outer join keylevels_results a on a.resultuid = s.result_uid where s.uuid ilike ? and a.resultuid is null); update solr_relevance_old set new_hod_correct = sub.hod_correct, new_hod_percent = sub.hod_percent, new_hod_total = sub.hod_total, new_pattern_correct = sub.pattern_correct, new_pattern_percent = sub.pattern_percent, new_pattern_total = sub.pattern_total, new_percent = sub.percent, new_symbol_correct = sub.symbol_correct, new_symbol_percent = sub.symbol_percent, new_symbol_total = sub.symbol_total from ( select distinct resultuid, hod_correct, hod_percent, hod_total, hod, pattern_correct, pattern_percent, pattern_total, percent, symbol_correct, symbol_percent, symbol_total from whatshot_probability where type in (...)) sub where result_uid = sub.resultuid;
-
with max_ra as ( select resultuid from relevance_keylevels_results order by resultuid desc limit 1) update solr_relevance_old set newrelevant = sub.relevant, newage = sub.age from ( select so.uuid, case when ra.relevant is not null then ra.relevant when so.result_uid < max_ra.resultuid then 0 else 1 end as relevant, case when ra.age is not null then ra.age when so.result_uid < max_ra.resultuid then 11 else 0 end as age, so.result_uid from max_ra, solr_relevance_old so inner join keylevels_results k on so.result_uid = k.resultuid and so.uuid ilike 'kl_%' inner join downloadersymbolsettings dss on k.symbolid = dss.symbolid left outer join relevance_keylevels_results ra on so.result_uid = ra.resultuid and so.uuid ilike 'kl_%') sub where solr_relevance_old.result_uid = sub.result_uid and solr_relevance_old.uuid ilike 'kl_%'; update solr_relevance_old set newrelevant = 0 where result_uid in ( select result_uid from solr_relevance_old s left outer join keylevels_results a on a.resultuid = s.result_uid where s.uuid ilike 'kl_%' and a.resultuid is null); UPDATE solr_relevance_old SET new_hod_correct = sub.hod_correct, new_hod_percent = sub.hod_percent, new_hod_total = sub.hod_total, new_pattern_correct = sub.pattern_correct, new_pattern_percent = sub.pattern_percent, new_pattern_total = sub.pattern_total, new_percent = sub.percent, new_symbol_correct = sub.symbol_correct, new_symbol_percent = sub.symbol_percent, new_symbol_total = sub.symbol_total FROM ( select distinct resultuid, hod_correct, hod_percent, hod_total, hod, pattern_correct, pattern_percent, pattern_total, percent, symbol_correct, symbol_percent, symbol_total FROM whatshot_probability WHERE type in ('kl', 'ekl')) sub WHERE result_uid = sub.resultuid;
Date: 2026-01-07 16:41:16 Duration: 0ms
Rank 4 — Count: 8, Total size: 985.63 MiB, Min size: 123.17 MiB, Max size: 123.24 MiB, Avg size: 123.20 MiB
select updateresultsmaterializedview ();
-
select updateresultsmaterializedview ();
Date: 2026-01-07 16:47:13 Duration: 0ms
Rank 5 — Count: 4, Total size: 328.24 MiB, Min size: 81.99 MiB, Max size: 82.16 MiB, Avg size: 82.06 MiB
select updateageforrelevantresults ();
-
select updateageforrelevantresults ();
Date: 2026-01-07 16:47:05 Duration: 0ms
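With 636 temporary files totalling 4.74 GiB in a single hour, temp usage is worth tracking over time. Per-database cumulative counters are available in pg_stat_database (a sketch); the usual knobs are log_temp_files for visibility and work_mem for the sessions running the queries listed above:

```sql
-- Cumulative number and total size of temporary files written per database
-- since the last stats reset.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY temp_bytes DESC;
```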
Queries generating the largest temporary files
Rank  Size        Date                 Query
1     188.27 MiB  2026-01-07 16:50:04  mt4datafeederrors feed-status query (full text below)
2     174.14 MiB  2026-01-07 16:40:04  mt4datafeederrors feed-status query
3     123.24 MiB  2026-01-07 16:47:13  select updateresultsmaterializedview ();
4     123.21 MiB  2026-01-07 16:32:17  select updateresultsmaterializedview ();
5     123.21 MiB  2026-01-07 16:17:15  select updateresultsmaterializedview ();
6     123.21 MiB  2026-01-07 16:50:32  select updateresultsmaterializedview ();
7     123.21 MiB  2026-01-07 16:35:32  select updateresultsmaterializedview ();
8     123.20 MiB  2026-01-07 16:02:14  select updateresultsmaterializedview ();
9     123.19 MiB  2026-01-07 16:20:32  select updateresultsmaterializedview ();
10    123.17 MiB  2026-01-07 16:05:33  select updateresultsmaterializedview ();
11    114.48 MiB  2026-01-07 16:00:05  mt4datafeederrors feed-status query
12    110.95 MiB  2026-01-07 16:30:05  mt4datafeederrors feed-status query
13    109.29 MiB  2026-01-07 16:20:04  mt4datafeederrors feed-status query
14    101.84 MiB  2026-01-07 16:30:05  mt4datafeederrors feed-status query
15    94.19 MiB   2026-01-07 16:10:05  mt4datafeederrors feed-status query
16    82.16 MiB   2026-01-07 16:02:06  select updateageforrelevantresults ();
17    82.06 MiB   2026-01-07 16:32:06  select updateageforrelevantresults ();
18    82.04 MiB   2026-01-07 16:47:05  select updateageforrelevantresults ();
19    81.99 MiB   2026-01-07 16:17:05  select updateageforrelevantresults ();
20    75.25 MiB   …                    mt4datafeederrors feed-status query

The mt4datafeederrors feed-status query, which appears verbatim in each of the entries above:
with rankedmt4 as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors ), last_feed_entry as ( select * from rankedmt4 where r = 1 ), ok_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where status = 'OK' ), earliest_entry_after_ok as ( select m.datafeedname, min(m.eventtimestamp) as eventtimestamp from mt4datafeederrors m left outer join ( select datafeedname, eventtimestamp from ok_entries where r = 1) oo on m.datafeedname = oo.datafeedname where m.eventtimestamp > coalesce(oo.eventtimestamp, '1900-01-01'::timestamp without time zone) group by m.datafeedname ), notified_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where notified is not null and notified <> '' ), broker as ( select *, row_number() over (partition by feedname order by brokerid) r from ( select distinct b.brokerid, b.name as brokername, dss.classname as feedname from downloadersymbolsettings dss inner join brokersymbollist bsl on dss.symbolid = bsl.symbolid inner join broker b on bsl.brokerid = b.brokerid where dss.enabled = 1) a ) select last.id, last.datafeedname, last.eventtimestamp, last.status, last.errordescription, last.serveraddress, last.username, note.notified, note.eventtimestamp, broker.brokername from last_feed_entry last inner join earliest_entry_after_ok after_ok on last.datafeedname = after_ok.datafeedname inner join broker on last.datafeedname = broker.feedname left outer join ok_entries ok on ok.datafeedname = last.datafeedname left outer join notified_entries note on note.datafeedname = last.datafeedname and note.r = 1 where (ok.r is null or ok.r = 1) and last.datafeedname not in ( select distinct datafeedname from last_feed_entry where status = 'OK') and extract(epoch from (last.eventtimestamp - after_ok.eventtimestamp)) > 60 * 60 and last.eventtimestamp > current_timestamp - interval '1 day' and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval '10 hours') and last.eventtimestamp > current_timestamp - interval '1 hour' and broker.r = 1;
current_timestamp - interval '1 day' and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval '10 hours') and last.eventtimestamp > current_timestamp - interval '1 hour' and broker.r = 1;[ Date: 2026-01-07 16:10:05 ]
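The two largest temporary-file offenders above are the same monitoring query, and every CTE in it (rankedmt4, ok_entries, notified_entries, broker) is a variation of one idiom: rank rows per group with row_number() over (partition by ... order by ... desc) and keep rank 1 to get the latest row per group. A minimal, self-contained sketch of that idiom follows; the table and data are illustrative, not the report's schema, and it runs against SQLite's window-function support (3.25+) rather than PostgreSQL:

```python
# Latest-row-per-group via row_number(), the pattern used by the
# rankedmt4 / last_feed_entry CTEs above. Illustrative schema only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table feed_errors (datafeedname text, eventtimestamp text, status text);
    insert into feed_errors values
        ('feedA', '2026-01-07 16:00', 'ERROR'),
        ('feedA', '2026-01-07 16:30', 'OK'),
        ('feedB', '2026-01-07 16:10', 'ERROR');
""")

rows = conn.execute("""
    with ranked as (
        select *,
               row_number() over (partition by datafeedname
                                  order by eventtimestamp desc) as r
        from feed_errors
    )
    select datafeedname, eventtimestamp, status
    from ranked
    where r = 1          -- keep only the newest entry per feed
    order by datafeedname
""").fetchall()

print(rows)
# [('feedA', '2026-01-07 16:30', 'OK'), ('feedB', '2026-01-07 16:10', 'ERROR')]
```

Because the reported query ranks the same table this way in three separate CTEs, each pass may sort the whole partition; when such sorts spill to disk they produce exactly the kind of temporary files this section counts.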
-
Vacuums
Vacuums / Analyzes Distribution
Key values
- 0 sec Highest CPU-cost vacuum (table, database and date not reported)
- 0 sec Highest CPU-cost analyze (table, database and date not reported)
Analyzes per table
Key values
- public.solr_relevance_old (16) Main table analyzed (database acaweb_fx)
- 57 analyzes Total
Table Number of analyzes
acaweb_fx.public.solr_relevance_old 16
acaweb_fx.public.datafeeds_latestrun 5
acaweb_fx.pg_catalog.pg_attribute 5
acaweb_fx.public.relevance_keylevels_results 4
acaweb_fx.pg_catalog.pg_class 4
acaweb_fx.public.relevance_fibonacci_results 4
acaweb_fx.public.relevance_autochartist_results 4
acaweb_fx.pg_catalog.pg_type 3
acaweb_fx.pg_catalog.pg_index 2
acaweb_fx.public.autochartist_symbolupdates 2
acaweb_fx.public.latest_t15_candle_view 2
acaweb_fx.public.solr_imports 1
acaweb_fx.pg_catalog.pg_depend 1
acaweb_fx.public.latest_candle_datetime_per_receng 1
acaweb_fx.public.patternresultsage 1
acaweb_fx.public.relevance_consecutivecandles_results 1
acaweb_fx.public.symbollatestupdatetime 1
Total 57
Vacuums per table
Key values
- public.solr_relevance_old (16) Main table vacuumed on database acaweb_fx
- 39 vacuums Total
Table, Vacuums, Index scans, Buffer hits, Buffer misses, Buffer dirtied, Skipped pins, Skipped frozen, WAL records, WAL full page images, WAL bytes
acaweb_fx.public.solr_relevance_old 16 16 14,125 0 52 0 0 8,771 1,131 5,785,866
acaweb_fx.public.datafeeds_latestrun 4 0 466 0 7 0 0 60 7 56,312
acaweb_fx.public.relevance_fibonacci_results 4 4 5,245 0 142 2 188 947 186 609,176
acaweb_fx.pg_toast.pg_toast_2619 2 2 322 0 92 0 0 238 83 338,818
acaweb_fx.public.relevance_keylevels_results 2 2 8,247 0 315 2 162 2,412 309 841,020
acaweb_fx.public.relevance_autochartist_results 2 2 7,001 0 213 0 452 1,779 207 523,798
acaweb_fx.pg_catalog.pg_type 1 1 128 0 25 0 0 49 16 108,806
acaweb_fx.public.autochartist_symbolupdates 1 1 25,973 0 5,338 3 37,256 9,023 5,534 2,353,538
acaweb_fx.pg_catalog.pg_statistic 1 1 1,004 0 97 0 582 390 80 351,542
acaweb_fx.pg_catalog.pg_attribute 1 1 797 0 179 0 67 363 139 819,773
acaweb_fx.public.symbollatestupdatetime 1 0 1,660 0 61 0 585 955 126 397,035
acaweb_fx.public.bigmovement_results_underlying 1 1 3,062 0 359 0 0 543 257 1,104,437
acaweb_fx.pg_catalog.pg_depend 1 1 391 0 64 0 59 161 57 313,228
acaweb_fx.public.latest_t15_candle_view 1 1 66 0 1 0 0 6 1 9,079
acaweb_fx.pg_catalog.pg_class 1 1 463 0 38 0 0 140 37 233,039
Total 39 34 68,950 50,737 6,983 7 39,351 25,837 8,170 13,845,467
Tuples removed per table
Key values
- public.symbollatestupdatetime (18,177) Main table with removed tuples on database acaweb_fx
- 43,118 tuples Total removed
Table, Vacuums, Index scans, Tuples removed, Tuples remain, Tuples not yet removable, Pages removed, Pages remain
acaweb_fx.public.symbollatestupdatetime 1 0 18,177 88,336 509 0 1,714
acaweb_fx.public.solr_relevance_old 16 16 12,116 100,219 0 0 3,507
acaweb_fx.public.autochartist_symbolupdates 1 1 5,380 54,588 7 0 40,691
acaweb_fx.public.relevance_keylevels_results 2 2 2,305 23,972 0 0 558
acaweb_fx.public.relevance_autochartist_results 2 2 1,878 17,314 0 0 760
acaweb_fx.pg_catalog.pg_attribute 1 1 1,381 10,798 8 0 259
acaweb_fx.pg_catalog.pg_statistic 1 1 573 3,711 0 0 1,194
acaweb_fx.pg_catalog.pg_depend 1 1 337 14,647 0 0 135
acaweb_fx.public.relevance_fibonacci_results 4 4 291 6,448 0 0 408
acaweb_fx.public.datafeeds_latestrun 4 0 236 56 0 0 64
acaweb_fx.pg_toast.pg_toast_2619 2 2 146 340 2 0 104
acaweb_fx.pg_catalog.pg_class 1 1 143 1,656 7 0 150
acaweb_fx.pg_catalog.pg_type 1 1 90 1,446 0 0 38
acaweb_fx.public.latest_t15_candle_view 1 1 65 14 0 0 1
acaweb_fx.public.bigmovement_results_underlying 1 1 0 8,514 0 0 258
Total 39 34 43,118 332,059 533 0 49,841
Pages removed per table
Key values
- unknown (0) Main table with removed pages on database unknown
- 0 pages Total removed
Pages removed per table
NO DATASET
Table, Number of vacuums, Index scans, Tuples removed, Pages removed
acaweb_fx.pg_toast.pg_toast_2619 2 2 146 0
acaweb_fx.pg_catalog.pg_type 1 1 90 0
acaweb_fx.public.autochartist_symbolupdates 1 1 5,380 0
acaweb_fx.public.datafeeds_latestrun 4 0 236 0
acaweb_fx.pg_catalog.pg_statistic 1 1 573 0
acaweb_fx.pg_catalog.pg_attribute 1 1 1,381 0
acaweb_fx.public.symbollatestupdatetime 1 0 18,177 0
acaweb_fx.public.bigmovement_results_underlying 1 1 0 0
acaweb_fx.pg_catalog.pg_depend 1 1 337 0
acaweb_fx.public.latest_t15_candle_view 1 1 65 0
acaweb_fx.public.relevance_keylevels_results 2 2 2,305 0
acaweb_fx.pg_catalog.pg_class 1 1 143 0
acaweb_fx.public.solr_relevance_old 16 16 12,116 0
acaweb_fx.public.relevance_autochartist_results 2 2 1,878 0
acaweb_fx.public.relevance_fibonacci_results 4 4 291 0
Total 39 34 43,118 0
Autovacuum Activity
Day Hour VACUUMs ANALYZEs
Jan 07 16 39 57
17 0 0
-
Locks
Locks by types
Key values
- unknown Main Lock Type
- 0 locks Total
Most frequent waiting queries (N)
Rank Count Total time Min time Max time Avg duration Query NO DATASET
Queries that waited the most
Rank Wait time Query NO DATASET
-
Queries
Queries by type
Key values
- 162,493 Total read queries
- 53,445 Total write queries
Queries by database
Key values
- unknown Main database
- 465,607 Requests
- 1h31m44s (unknown) Main time consuming database
Database Request type Count Duration
acaweb_fx Total 934 0ms
  copy from 80 0ms
  copy to 26 0ms
  cte 104 0ms
  ddl 16 0ms
  delete 16 0ms
  others 217 0ms
  select 102 0ms
  tcl 332 0ms
  update 41 0ms
postgres Total 2 0ms
  others 2 0ms
socialmedia Total 112 0ms
  others 10 0ms
  select 91 0ms
  tcl 11 0ms
unknown Total 465,607 1h31m44s
  copy from 16 0ms
  cte 12,253 0ms
  insert 36,137 0ms
  others 17,160 0ms
  select 162,300 0ms
  tcl 590 0ms
  update 3,823 0ms
Queries by user
Key values
- unknown Main user
- 465,607 Requests
User Request type Count Duration
postgres Total 1,048 0ms
  copy from 80 0ms
  copy to 26 0ms
  cte 104 0ms
  ddl 16 0ms
  delete 16 0ms
  others 229 0ms
  select 193 0ms
  tcl 343 0ms
  update 41 0ms
unknown Total 465,607 1h31m44s
  copy from 16 0ms
  cte 12,253 0ms
  insert 36,137 0ms
  others 17,160 0ms
  select 162,300 0ms
  tcl 590 0ms
  update 3,823 0ms
Duration by user
Key values
- 1h31m44s (unknown) Main time consuming user
User Request type Count Duration
postgres Total 1,048 0ms
  copy from 80 0ms
  copy to 26 0ms
  cte 104 0ms
  ddl 16 0ms
  delete 16 0ms
  others 229 0ms
  select 193 0ms
  tcl 343 0ms
  update 41 0ms
unknown Total 465,607 1h31m44s
  copy from 16 0ms
  cte 12,253 0ms
  insert 36,137 0ms
  others 17,160 0ms
  select 162,300 0ms
  tcl 590 0ms
  update 3,823 0ms
Queries by host
Key values
- unknown Main host
- 466,655 Requests
- 1h31m44s (unknown) Main time consuming host
Queries by application
Key values
- unknown Main application
- 466,257 Requests
- 1h31m44s (unknown) Main time consuming application
Application Request type Count Duration
pgAdmin 4 - CONN:6655762 Total 1 0ms
  tcl 1 0ms
pgAdmin 4 - CONN:8733126 Total 1 0ms
  others 1 0ms
pgAdmin 4 - DB:acaweb_fx Total 2 0ms
  others 2 0ms
pgAdmin 4 - DB:postgres Total 2 0ms
  others 2 0ms
pgAdmin 4 - DB:socialmedia Total 3 0ms
  others 3 0ms
psql Total 389 0ms
  copy from 80 0ms
  copy to 26 0ms
  cte 104 0ms
  ddl 16 0ms
  delete 16 0ms
  others 4 0ms
  select 102 0ms
  update 41 0ms
unknown Total 466,257 1h31m44s
  copy from 16 0ms
  cte 12,253 0ms
  insert 36,137 0ms
  others 17,377 0ms
  select 162,391 0ms
  tcl 932 0ms
  update 3,823 0ms
Number of cancelled queries
Key values
- 0 per second Cancelled query Peak
- 2026-01-07 16:42:48 Date
Number of cancelled queries (5 minutes period)
NO DATASET
-
Top Queries
Histogram of query times
Key values
- 157,977 0-1ms duration
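The histogram collapses nearly all of the 466,655 executions into the 0-1ms bucket. For readers who want to reproduce this kind of breakdown from their own timing data, here is a small sketch of duration bucketing; the bucket bounds below are illustrative choices, not necessarily pgBadger's exact ranges:

```python
# Bucket millisecond durations into histogram ranges, as the
# "Histogram of query times" section does. Sample durations are made up.
from bisect import bisect_right

# Upper bounds (ms) for the buckets 0-1ms, 1-5ms, 5-10ms, ... (illustrative)
BOUNDS = [1, 5, 10, 100, 500, 1000, 10000]
LABELS = ["0-1ms", "1-5ms", "5-10ms", "10-100ms",
          "100-500ms", "500ms-1s", "1s-10s", ">10s"]

def histogram(durations_ms):
    counts = {label: 0 for label in LABELS}
    for d in durations_ms:
        # bisect_right finds the first bound greater than d,
        # which is exactly the index of d's bucket label
        counts[LABELS[bisect_right(BOUNDS, d)]] += 1
    return counts

sample = [0.2, 0.7, 3.0, 250.0, 20821.0]   # made-up durations in ms
print(histogram(sample))
```

A skew like the one reported here (the bulk of queries under 1ms, with a 20s821ms outlier) is why the percentile rows in General Activity are more informative than the average alone.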
Slowest individual queries
Rank Duration Query NO DATASET
Time consuming queries
Rank Total duration Times executed Min duration Max duration Avg duration Query 1 0ms 3 0ms 0ms 0ms insert into t30 (symbolid, pricedatetime, open, high, low, close, volume, bsf, sastdatetimereceived) values (?, ?::timestamp without time zone, ?.?, ?.?, ?.?, ?, ?, ?, ?::timestamp without time zone) on conflict (symbolid, pricedatetime) do nothing;Times Reported Time consuming queries #1
Day Hour Count Duration Avg duration Jan 07 16 3 0ms 0ms 2 0ms 37 0ms 0ms 0ms select key, value from datasources ds inner join datasourceparams dsp on ds.id = dsp.datasourceid where ds.name = ?;Times Reported Time consuming queries #2
Day Hour Count Duration Avg duration Jan 07 16 37 0ms 0ms 3 0ms 31 0ms 0ms 0ms with rar_max as ( select resultuid from relevance_bigmovement_results order by resultuid desc limit ? ) select bmr.symbolid, patternstarttime, patternendtime, timegranularity, ? as direction, case when bmr.old_resultuid = ? then bmr.old_resultuid else bmr.resultuid end as uid, s.exchange, s.symbol, s.longname, s.shortname, dtt.timezone, bmr.patternmovement, bmr.statisticalmovement, bmr.fromprice, bmr.toprice, bmr.percentile, bmr.patternlengthbars, case when rbr.age is not null then rbr.age when bmr.resultuid <= rm.resultuid then ? else ? end as age, case when rbr.relevant is not null then rbr.relevant when bmr.resultuid <= rm.resultuid then ? else ? end as relevant, cps.pip from bigmovement_results bmr inner join downloadersymbolsettings dss on bmr.symbolid = dss.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname inner join symbols s on bmr.symbolid = s.symbolid inner join rar_max rm on ? = ? left outer join relevance_bigmovement_results rbr on rbr.resultuid = bmr.resultuid left join currencypips cps on cps.symbol = s.symbol where (bmr.old_resultuid = ? or bmr.resultuid = ?) and dtt.dayofweek = ?;Times Reported Time consuming queries #3
Day Hour Count Duration Avg duration Jan 07 16 31 0ms 0ms 4 0ms 2,239 0ms 0ms 0ms insert into t60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) on conflict (pricedatetime, symbolid) do update set open = ?, high = ?, low = ?, close = ?, volume = ?, bsf = ?, sastdatetimewritten = ?, sastdatetimereceived = ?;Times Reported Time consuming queries #4
Day Hour Count Duration Avg duration Jan 07 16 2,239 0ms 0ms 5 0ms 48 0ms 0ms 0ms select count(*) from datafeeds_latestrun where feedname ilike ? and ((latestrxtime > current_timestamp - interval ? and latestdbwritetime > current_timestamp - interval ?) or (latestdbwritetime > current_timestamp - interval ? and lateststartuptime > current_timestamp - interval ?));Times Reported Time consuming queries #5
Day Hour Count Duration Avg duration Jan 07 16 48 0ms 0ms 6 0ms 4 0ms 0ms 0ms select updaterelevantforrelevantresults ();Times Reported Time consuming queries #6
Day Hour Count Duration Avg duration Jan 07 16 4 0ms 0ms 7 0ms 6 0ms 0ms 0ms set datestyle = iso;Times Reported Time consuming queries #7
Day Hour Count Duration Avg duration Jan 07 16 6 0ms 0ms 8 0ms 6 0ms 0ms 0ms set client_encoding to ?;Times Reported Time consuming queries #8
Day Hour Count Duration Avg duration Jan 07 16 6 0ms 0ms 9 0ms 1 0ms 0ms 0ms select "public"."executions"."id" AS "id", "public"."executions"."processid" AS "processid", "public"."executions"."executiondate" AS "executiondate", "public"."executions"."errorcount" AS "errorcount", "public"."executions"."warningcount" AS "warningcount", "public"."executions"."isrunning" AS "isrunning", "public"."executions"."response" AS "response", "public"."executions"."live" AS "live", "public"."executions"."has_results" AS "has_results", "LT?"."id" AS "LA?" from "public"."executions" left outer join "public"."processes" "LT?" on "LT?"."id" = "public"."executions"."processid" where (processid = ?) order by "public"."executions"."id" desc limit ? offset ?;Times Reported Time consuming queries #9
Day Hour Count Duration Avg duration Jan 07 16 1 0ms 0ms 10 0ms 18 0ms 0ms 0ms select cast(count(*) / cast(setting as numeric) * ? as int) from pg_stat_activity, pg_settings where name = ? group by setting;Times Reported Time consuming queries #10
Day Hour Count Duration Avg duration Jan 07 16 18 0ms 0ms 11 0ms 1 0ms 0ms 0ms select count(*) from "public"."executions" left outer join "public"."processes" "LT?" on "LT?"."id" = "public"."executions"."processid" where (processid = ?);Times Reported Time consuming queries #11
Day Hour Count Duration Avg duration Jan 07 16 1 0ms 0ms 12 0ms 1 0ms 0ms 0ms select pricedatetime, open, high, low, close, volume, symbolid, symbol, sastdatetimewritten, interval, spike_threshold, classname from candle_spikes where classname = ? and interval = ? and symbol in (...) and gap_check = ? and pricedatetime between ? and ? and ( select count(*) from t15 where symbolid = candle_spikes.symbolid and pricedatetime between ? and ?) = ? order by pricedatetime desc limit ?;Times Reported Time consuming queries #12
Day Hour Count Duration Avg duration Jan 07 16 1 0ms 0ms 13 0ms 466 0ms 0ms 0ms commit;Times Reported Time consuming queries #13
Day Hour Count Duration Avg duration Jan 07 16 466 0ms 0ms 14 0ms 400 0ms 0ms 0ms with rar_max as ( select resultuid from relevance_keylevels_results order by resultuid desc limit ? ), kr as ( select a.*, rr.age, rr.relevant from keylevels_results a left outer join relevance_keylevels_results rr on a.resultuid = rr.resultuid where case when false = ? then true else a.resultuid > ( select min(resultuid) from relevance_keylevels_results) end ), all_results as ( select kr.resultuid as resultuid, kr.direction as direction, s.exchange as exchange, s.symbolid as symbolid, coalesce(bim.code, s.symbol) as symbol_code, s.longname as symbol_name, s.timegranularity as interval, p.patternname as pattern_name, kr.breakout as breakout, kr.atbaridentified as identified, dtt.timezone as timezone, kr.patternlengthbars as length, g.basegroupname, newlevels.filtered, case when kr.age is not null then kr.age when kr.resultuid <= rm.resultuid then ? else ? end as age, case when kr.relevant is not null then kr.relevant when kr.resultuid <= rm.resultuid then ? else ? end as relevant, cps.pip from kr inner join brokersymbollist bsl on bsl.brokerid = ? and bsl.symbolid = kr.symbolid inner join symbols s on bsl.symbolid = s.symbolid and s.nonliquid = ? inner join symbolgroup sg on s.symbolid = sg.symbolid inner join groups g on sg.groupid = g.groupid inner join brokergroups bg on g.groupid = bg.groupid and bsl.brokerid = bg.brokerid inner join hrspatterns p on kr.patternid = p.patternid inner join downloadersymbolsettings dss on s.symbolid = dss.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname and dtt.dayofweek = ? inner join rar_max rm on ? = ? 
left outer join autochartist_symbolupdates au on dss.symbolid = au.symbolid left outer join relevance_keylevels_results rar on rar.resultuid = kr.resultuid left join lateral calc_kl_signal_filter (kr.resultuid) newlevels on true left join currencypips cps on cps.symbol = s.symbol left outer join brokerinstrumentmap bim on dss.datafeedinstrumentid = bim.datafeedinstrumentid and bim.brokerid = bsl.brokerid and bim.type = ? where kr.gmttimefound > now() - interval ? and dss.enabled = ? and s.deleted = ? and (kr.simulation = ? or kr.simulation is null) and (? = ? or s.timegranularity in (...)) and (? = ? or s.exchange in (...)) and (? = ? or coalesce(bim.code, s.symbol) in (...)) and (? = ? or p.patternname in (...)) and (? = ? or kr.patternclassid in (...)) and (? = ? or kr.patternlengthbars <= ?) and kr.patternstarttime::timestamp without time zone >= coalesce(au.earliestpricedatetime, ?::timestamp without time zone) -- to make sure patternstarttime is in our t-tables ), results as ( select distinct on (symbolid) * from all_results where (false = ? or relevant = ?) and (? = ? or age <= ?) order by symbolid, resultuid ) select * from results order by identified desc, length desc limit ?;Times Reported Time consuming queries #14
Day Hour Count Duration Avg duration Jan 07 16 400 0ms 0ms 15 0ms 240 0ms 0ms 0ms select count(*), sum(size), extract(epoch from now() - min(modification)) from pg_ls_waldir ();Times Reported Time consuming queries #15
Day Hour Count Duration Avg duration Jan 07 16 240 0ms 0ms 16 0ms 240 0ms 0ms 0ms select system_identifier from pg_control_system ();Times Reported Time consuming queries #16
Day Hour Count Duration Avg duration Jan 07 16 240 0ms 0ms 17 0ms 9 0ms 0ms 0ms select groupid, exchange, groupname, symbol, longname from prfsymboltree where brokerid = ? order by groupname, symbol;Times Reported Time consuming queries #17
Day Hour Count Duration Avg duration Jan 07 16 9 0ms 0ms 18 0ms 5 0ms 0ms 0ms insert into t15 (symbolid, pricedatetime, open, high, low, close, volume, bsf, sastdatetimereceived) values (?, ?::timestamp without time zone, ?, ?.?, ?.?, ?.?, ?, ?, ?::timestamp without time zone) on conflict (symbolid, pricedatetime) do nothing;Times Reported Time consuming queries #18
Day Hour Count Duration Avg duration Jan 07 16 5 0ms 0ms 19 0ms 1 0ms 0ms 0ms select * from processresults limit ?;Times Reported Time consuming queries #19
Day Hour Count Duration Avg duration Jan 07 16 1 0ms 0ms 20 0ms 8 0ms 0ms 0ms select updatedatafeedslatestrun (?);Times Reported Time consuming queries #20
Day Hour Count Duration Avg duration Jan 07 16 8 0ms 0ms
Most frequent queries (N)
Rank Times executed Total duration Min duration Max duration Avg duration Query 1 75,018 0ms 0ms 0ms 0ms select ?;Times Reported Time consuming queries #1
Day Hour Count Duration Avg duration Jan 07 16 75,017 0ms 0ms 17 1 0ms 0ms 2 54,990 0ms 0ms 0ms 0ms select distinct on (coalesce(bim.code, s.symbol) , s.exchange, s.timegranularity, df.timezone) s.symbolid as id, coalesce(bim.code, s.symbol) as name, s.symbol as symbol, dss.downloadersymbol as ticker, s.exchange as exchange, s.timegranularity as interval, df.timezone as timezone from symbols s inner join downloadersymbolsettings dss on dss.symbolid = s.symbolid inner join datafeedstimetable df on df.classname ilike dss.classname left join brokersymbollist bsl on bsl.brokerid = ? and bsl.symbolid = s.symbolid left outer join brokerinstrumentmap bim on dss.datafeedinstrumentid = bim.datafeedinstrumentid and bim.brokerid = ? and bim.type = ? where s.symbolid = ?;Times Reported Time consuming queries #2
Day Hour Count Duration Avg duration Jan 07 16 54,990 0ms 0ms 3 13,158 0ms 0ms 0ms 0ms select s.symbolid as id, s.symbol as name, s.exchange as exchange, s.timegranularity as interval, dtt.timezone as timezone from symbols s inner join downloadersymbolsettings dss on dss.symbolid = s.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname and dtt.dayofweek = ? inner join brokersymbollist bsl on bsl.symbolid = s.symbolid where bsl.brokerid = ? and (? = ? or s.timegranularity = ?) and (s.symbol = ? or dss.downloadersymbol = ?) and dss.enabled = ?;Times Reported Time consuming queries #3
Day Hour Count Duration Avg duration Jan 07 16 13,158 0ms 0ms 4 8,340 0ms 0ms 0ms 0ms insert into executionlogs (executionid, status, message, details, detailtype) values (null, ?, ?, null, null);Times Reported Time consuming queries #4
Day Hour Count Duration Avg duration Jan 07 16 8,340 0ms 0ms 5 8,291 0ms 0ms 0ms 0ms set extra_float_digits = ?;Times Reported Time consuming queries #5
Day Hour Count Duration Avg duration Jan 07 16 8,291 0ms 0ms 6 8,265 0ms 0ms 0ms 0ms set application_name = ?;Times Reported Time consuming queries #6
Day Hour Count Duration Avg duration Jan 07 16 8,265 0ms 0ms 7 6,714 0ms 0ms 0ms 0ms insert into autochartist_results (resultid, symbolid, bandwidth, pattern, qtytp, gmttimefound, direction, initialtrend, breakout, volumeincrease, noise, symmetry, predictionpricefrom, predictionpriceto, predictiontimefrom, predictiontimeto, patternstarttime, patternendtime, patternstartprice, patternendprice, resx0, resx1, supportx0, supportx1, resy0, resy1, supporty0, supporty1, supportgradient, resgradient, riskreward, patternquality, trendchange, maxmovementafterbreakout, latestbaratbreakouttime, latestbaratbreakoutprice, patternlengthbars, temporarypattern, relevancestartdistance, simulation, writtendatetime) values (?, ?, ?.?, ?, ?, ?::timestamp without time zone, ?, ?.?, ?.?, ?.?, ?.?, ?.?, ?.?, ?.?, ?::timestamp without time zone, ?::timestamp without time zone, ?::timestamp without time zone, ?::timestamp without time zone, ?.?, ?.?, ?::timestamp without time zone, ?::timestamp without time zone, ?::timestamp without time zone, ?::timestamp without time zone, ?.?, ?.?, ?.?, ?.?, ?.?, ?.?, ?.?, ?.?, ?, ?.?, ?::timestamp without time zone, ?.?, ?, ?, ?.?, ?, current_timestamp::timestamp without time zone) on conflict do nothing;Times Reported Time consuming queries #7
Day Hour Count Duration Avg duration Jan 07 16 6,714 0ms 0ms 8 6,230 0ms 0ms 0ms 0ms with rar_max as ( select resultuid from relevance_autochartist_results order by resultuid desc limit ? ) select a.symbolid, pattern, patternid, resy0, resy1, resx0, resx1, supporty0, supporty1, supportx0, supportx1, predictiontimeto, patternstarttime, timegranularity, patternendtime, direction, trendchange, patternlengthbars, patternquality, case when a.old_resultuid = ? then a.old_resultuid else a.resultuid end as uid, breakout, initialtrend, volumeincrease, symmetry as uniformity, predictionpricefrom, predictionpriceto, noise, s.exchange, s.symbol, s.longname, s.shortname, breakout, dtt.timezone, patternstartprice, patternendprice, qtytp, newlevels.profit, newlevels.stop, newlevels.filtered, case when rar.age is not null then rar.age when a.resultuid <= rm.resultuid then ? else ? end as age, case when rar.relevant is not null then rar.relevant when a.resultuid <= rm.resultuid then ? else ? end as relevant, cps.pip from autochartist_results a inner join downloadersymbolsettings dss on a.symbolid = dss.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname inner join symbols s on a.symbolid = s.symbolid inner join patterns p on p.patternname = a.pattern inner join rar_max rm on ? = ? left outer join relevance_autochartist_results rar on rar.resultuid = a.resultuid left join lateral calc_cp_signal (a.resultuid) newlevels on true left join currencypips cps on cps.symbol = s.symbol where (a.old_resultuid = ? or a.resultuid = ?) and dtt.dayofweek = ?;Times Reported Time consuming queries #8
Day Hour Count Duration Avg duration Jan 07 16 6,230 0ms 0ms 9 5,750 0ms 0ms 0ms 0ms insert into t15 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) on conflict (pricedatetime, symbolid) do update set open = ?, high = ?, low = ?, close = ?, volume = ?, bsf = ?, sastdatetimewritten = ?, sastdatetimereceived = ?;Times Reported Time consuming queries #9
Day Hour Count Duration Avg duration Jan 07 16 5,750 0ms 0ms 10 3,746 0ms 0ms 0ms 0ms insert into fibonacci_results (bandwidth, pattern, gmttimefound, direction, patternstarttime, patternendtime, patternstartprice, patternendprice, qtytp, pricex, timex, pricea, timea, priceb, timeb, pricec, timec, priced, timed, averagequality, timequality, errormargin, patternlengthbars, target10, target06, target16, target07, target12, target05, target03, symbolid, noise, ratiosfound, temporarypattern, uniqueindex, completed, simulation, writtendatetime) values (?.?, ?, ?::timestamp without time zone, ?, ?::timestamp without time zone, ?::timestamp without time zone, ?.?, ?.?, ?, ?.?, ?::timestamp without time zone, ?.?, ?::timestamp without time zone, ?.?, ?::timestamp without time zone, ?.?, ?::timestamp without time zone, ?.?, ?::timestamp without time zone, ?.?, ?.?, ?.?, ?, ?.?, ?.?, ?.?, ?.?, ?.?, ?.?, ?.?, ?, ?.?, ?, ?, ?, ?, ?, current_timestamp::timestamp without time zone) on conflict do nothing;Times Reported Time consuming queries #10
Day Hour Count Duration Avg duration Jan 07 16 3,746 0ms 0ms 11 3,358 0ms 0ms 0ms 0ms insert into t30 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) on conflict (pricedatetime, symbolid) do update set open = ?, high = ?, low = ?, close = ?, volume = ?, bsf = ?, sastdatetimewritten = ?, sastdatetimereceived = ?;Times Reported Time consuming queries #11
Day Hour Count Duration Avg duration Jan 07 16 3,358 0ms 0ms 12 3,235 0ms 0ms 0ms 0ms update patternresultsrelevance set relevant = ?, saxo_relevant = ?, notrelevantpricedatetime = ?, reason = ? where uniqueindex = ? and relevant = ?;Times Reported Time consuming queries #12
Day Hour Count Duration Avg duration Jan 07 16 3,235 0ms 0ms 13 3,202 0ms 0ms 0ms 0ms insert into keylevels_results (bandwidth, breakout, patternid, gmttimefound, approachingtimestamp, approachingregion, qtytp, patternlengthbars, patternprice, x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, breakoutbars, breakoutprice, patternendtime, atbaridentified, atpriceidentified, errormargin, direction, symbolid, predictionpricefrom, predictionpriceto, predictiontimefrom, predictiontimebars, uniquepointsvalue, furthestprice, relevancestartdistance, patternclassid, patternstarttime, stoplosslevel, simulation, writtendatetime) values (?.?, ?, ?, ?::timestamp without time zone, ?, ?.?, ?, ?, ?.?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?.?, ?::timestamp without time zone, ?, ?.?, ?.?, ?, ?, ?.?, ?.?, ?::timestamp without time zone, ?, ?, ?.?, ?.?, ?, ?, ?.?, ?, current_timestamp::timestamp without time zone) on conflict do nothing;Times Reported Time consuming queries #13
Day Hour Count Duration Avg duration Jan 07 16 3,202 0ms 0ms 14 2,997 0ms 0ms 0ms 0ms with rar_max as ( select resultuid from relevance_keylevels_results order by resultuid desc limit ? ) select case when a.old_resultuid = ? then a.old_resultuid else a.resultuid end as ruid, s.symbolid as sid, s.symbol as sym, longname, shortname, exchange as e, timegranularity as tg, a.patternid as pid, a.direction as d, a.patternprice as pp, atbaridentified as pet, case when (x9 != ?) then x9 when (x8 != ?) then x8 when (x7 != ?) then x7 when (x6 != ?) then x6 when (x5 != ?) then x5 when (x4 != ?) then x4 when (x3 != ?) then x3 when (x2 != ?) then x2 end as pst, patternprice as patp, x0, x1, x2, case when (x3 != ?) then x3 else ? end as x3, case when (x4 != ?) then x4 else ? end as x4, case when (x5 != ?) then x5 else ? end as x5, case when (x6 != ?) then x6 else ? end as x6, case when (x7 != ?) then x7 else ? end as x7, case when (x8 != ?) then x8 else ? end as x8, errormargin as erm, breakoutprice as pe, breakoutbars as be, breakout, atbaridentified as atbar, atpriceidentified as atprice, patternlengthbars as l, bandwidth as bw, qtytp as qtp, p.patternname as patternname, dtt.absolutetimezoneoffset as tzos, dtt.timezone as timezone, approachingtimestamp as apt, approachingregion as apr, predictionpricefrom as ppf, predictionpriceto as ppt, predictiontimefrom as ptf, predictiontimebars as ptb, furthestprice as fp, newlevels.filtered, a.uniquepointsvalue as upv, case when rar.age is not null then rar.age when a.resultuid <= rm.resultuid then ? else ? end as age, case when rar.relevant is not null then rar.relevant when a.resultuid <= rm.resultuid then ? else ? end as relevant, cps.pip from keylevels_results a inner join downloadersymbolsettings dss on a.symbolid = dss.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname inner join symbols s on a.symbolid = s.symbolid inner join hrspatterns p on a.patternid = p.patternid inner join rar_max rm on ? = ? 
left outer join relevance_keylevels_results rar on a.resultuid = rar.resultuid left join lateral calc_kl_signal_filter (a.resultuid) newlevels on true left join currencypips cps on cps.symbol = s.symbol where (a.old_resultuid = ? or a.resultuid = ?) and dtt.dayofweek = ?;Times Reported Time consuming queries #14
Times reported: Jan 07 16h, count 2,997, duration 0ms, avg 0ms
Time consuming queries #15: 2,239 executions; min 0ms, max 0ms, avg 0ms, total 0ms
insert into t60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) on conflict (pricedatetime, symbolid) do update set open = ?, high = ?, low = ?, close = ?, volume = ?, bsf = ?, sastdatetimewritten = ?, sastdatetimereceived = ?;
Times reported: Jan 07 16h, count 2,239, duration 0ms, avg 0ms
Time consuming queries #16: 1,408 executions; min 0ms, max 0ms, avg 0ms, total 0ms
select category, name, sum(total) as total, sum(correct) as correct, (cast(sum(correct) as float) / cast(sum(total) as float)) * ?.? as percentage, min("from") AS "from", max("to") AS "to" from ( select category, name, total, correct, percentage, "from", "to" from stats_summary where statsid = ? and category = lower(?) union select category, name, total, correct, percentage, "from", "to" from stats_hrs_summary where statsid = ? and category = lower(?) order by correct desc) as summdata group by category, name having sum(total) > ? order by name;
Times reported: Jan 07 16h, count 1,408, duration 0ms, avg 0ms
Time consuming queries #17: 1,080 executions; min 0ms, max 0ms, avg 0ms, total 0ms
insert into t240 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) on conflict (pricedatetime, symbolid) do update set open = ?, high = ?, low = ?, close = ?, volume = ?, bsf = ?, sastdatetimewritten = ?, sastdatetimereceived = ?;
Times reported: Jan 07 16h, count 1,080, duration 0ms, avg 0ms
Time consuming queries #18: 1,061 executions; min 0ms, max 0ms, avg 0ms, total 0ms
select symbolid, pricedatetime, classname, downloadfrequency, downloadersymbol, open, high, low, close, volume, bsf, sastdatetimereceived from ( select pricedatetime, dss.classname, dss.downloadfrequency, dss.symbolid, dss.downloadersymbol, t.open, t.high, t.low, t.close, t.volume, t.bsf, t.sastdatetimereceived, row_number() over (partition by t.symbolid order by t.pricedatetime desc) as rn from t15 t, downloadersymbolsettings dss, symbols s where dss.classname = ? and dss.downloadfrequency = ? and dss.symbolid = t.symbolid and s.symbolid = dss.symbolid and dss.enabled = ? and s.deleted = ? and dss.downloadersymbol in (...) and t.pricedatetime > now() - interval ?) as ranked_candles_table where rn = ?;
Times reported: Jan 07 16h, count 1,061, duration 0ms, avg 0ms
Time consuming queries #19: 909 executions; min 0ms, max 0ms, avg 0ms, total 0ms
select t.pricedatetime as pricedatetime, t.open as open, t.high as high, t.low as low, t.close "..." close, t.volume as volume, t.bsf as bsf from t60 t where t.symbolid = ? and (bsf = ? or bsf is null) and pricedatetime >= ? and pricedatetime <= ? order by pricedatetime desc limit ?;
Times reported: Jan 07 16h, count 909, duration 0ms, avg 0ms
Time consuming queries #20: 836 executions; min 0ms, max 0ms, avg 0ms, total 0ms
select case when a.old_resultuid = ? then a.old_resultuid else a.resultuid end as resultuid, s.symbol, pattern as patternname, timegranularity as interval, patternlengthbars as length, patternendtime, direction, breakout, predictiontimeto, predictionpricefrom, predictionpriceto, patternstartprice, resy1, supporty1, dtt.timezone, cps.pip, newlevels.profit from autochartist_results a inner join downloadersymbolsettings dss on a.symbolid = dss.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname inner join symbols s on a.symbolid = s.symbolid inner join patterns p on p.patternname = a.pattern left join currencypips cps on cps.symbol = s.symbol left join lateral calc_cp_signal (a.resultuid) newlevels on true where (a.old_resultuid = ? or a.resultuid = ?) and dtt.dayofweek = ?;
Times reported: Jan 07 16h, count 836, duration 0ms, avg 0ms
Normalized slowest queries (N)
Normalized slowest queries #1: min 0ms, max 0ms, avg 0ms, 3 executions, total 0ms
insert into t30 (symbolid, pricedatetime, open, high, low, close, volume, bsf, sastdatetimereceived) values (?, ?::timestamp without time zone, ?.?, ?.?, ?.?, ?, ?, ?, ?::timestamp without time zone) on conflict (symbolid, pricedatetime) do nothing;
Times reported: Jan 07 16h, count 3, duration 0ms, avg 0ms
Normalized slowest queries #2: min 0ms, max 0ms, avg 0ms, 37 executions, total 0ms
select key, value from datasources ds inner join datasourceparams dsp on ds.id = dsp.datasourceid where ds.name = ?;
Times reported: Jan 07 16h, count 37, duration 0ms, avg 0ms
Normalized slowest queries #3: min 0ms, max 0ms, avg 0ms, 31 executions, total 0ms
with rar_max as ( select resultuid from relevance_bigmovement_results order by resultuid desc limit ? ) select bmr.symbolid, patternstarttime, patternendtime, timegranularity, ? as direction, case when bmr.old_resultuid = ? then bmr.old_resultuid else bmr.resultuid end as uid, s.exchange, s.symbol, s.longname, s.shortname, dtt.timezone, bmr.patternmovement, bmr.statisticalmovement, bmr.fromprice, bmr.toprice, bmr.percentile, bmr.patternlengthbars, case when rbr.age is not null then rbr.age when bmr.resultuid <= rm.resultuid then ? else ? end as age, case when rbr.relevant is not null then rbr.relevant when bmr.resultuid <= rm.resultuid then ? else ? end as relevant, cps.pip from bigmovement_results bmr inner join downloadersymbolsettings dss on bmr.symbolid = dss.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname inner join symbols s on bmr.symbolid = s.symbolid inner join rar_max rm on ? = ? left outer join relevance_bigmovement_results rbr on rbr.resultuid = bmr.resultuid left join currencypips cps on cps.symbol = s.symbol where (bmr.old_resultuid = ? or bmr.resultuid = ?) and dtt.dayofweek = ?;
Times reported: Jan 07 16h, count 31, duration 0ms, avg 0ms
Normalized slowest queries #4: min 0ms, max 0ms, avg 0ms, 2,239 executions, total 0ms
insert into t60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?) on conflict (pricedatetime, symbolid) do update set open = ?, high = ?, low = ?, close = ?, volume = ?, bsf = ?, sastdatetimewritten = ?, sastdatetimereceived = ?;
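The heaviest statements in both this list and the time-consuming list are INSERT ... ON CONFLICT ... DO UPDATE upserts into the candle tables (t15, t30, t60, t240). A minimal runnable sketch of that upsert pattern, using SQLite's compatible UPSERT syntax and a hypothetical cut-down column set rather than the production schema (the production statements pass the update values as extra placeholders; `excluded.*` achieves the same effect):

```python
import sqlite3

# In-memory stand-in for a candle table; the real t60 has more columns
# (volume, bsf, sastdatetimewritten, sastdatetimereceived).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE t60 (
        pricedatetime TEXT NOT NULL,
        symbolid      INTEGER NOT NULL,
        open REAL, high REAL, low REAL, close REAL,
        PRIMARY KEY (pricedatetime, symbolid)
    )
""")

upsert = """
    INSERT INTO t60 (pricedatetime, symbolid, open, high, low, close)
    VALUES (?, ?, ?, ?, ?, ?)
    ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET
        open = excluded.open, high = excluded.high,
        low = excluded.low, close = excluded.close
"""

# First write inserts the candle; the second, arriving for the same
# (pricedatetime, symbolid) key, updates it in place instead of failing.
conn.execute(upsert, ("2026-01-07 16:00:00", 1, 1.10, 1.12, 1.09, 1.11))
conn.execute(upsert, ("2026-01-07 16:00:00", 1, 1.10, 1.13, 1.09, 1.12))

rows = conn.execute("SELECT COUNT(*), MAX(close) FROM t60").fetchone()
```

This write-mostly upsert traffic explains why the same four table names dominate both the query and the prepare rankings.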
Times reported: Jan 07 16h, count 2,239, duration 0ms, avg 0ms
Normalized slowest queries #5: min 0ms, max 0ms, avg 0ms, 48 executions, total 0ms
select count(*) from datafeeds_latestrun where feedname ilike ? and ((latestrxtime > current_timestamp - interval ? and latestdbwritetime > current_timestamp - interval ?) or (latestdbwritetime > current_timestamp - interval ? and lateststartuptime > current_timestamp - interval ?));
Times reported: Jan 07 16h, count 48, duration 0ms, avg 0ms
Normalized slowest queries #6: min 0ms, max 0ms, avg 0ms, 4 executions, total 0ms
select updaterelevantforrelevantresults ();
Times reported: Jan 07 16h, count 4, duration 0ms, avg 0ms
Normalized slowest queries #7: min 0ms, max 0ms, avg 0ms, 6 executions, total 0ms
set datestyle = iso;
Times reported: Jan 07 16h, count 6, duration 0ms, avg 0ms
Normalized slowest queries #8: min 0ms, max 0ms, avg 0ms, 6 executions, total 0ms
set client_encoding to ?;
Times reported: Jan 07 16h, count 6, duration 0ms, avg 0ms
Normalized slowest queries #9: min 0ms, max 0ms, avg 0ms, 1 execution, total 0ms
select "public"."executions"."id" AS "id", "public"."executions"."processid" AS "processid", "public"."executions"."executiondate" AS "executiondate", "public"."executions"."errorcount" AS "errorcount", "public"."executions"."warningcount" AS "warningcount", "public"."executions"."isrunning" AS "isrunning", "public"."executions"."response" AS "response", "public"."executions"."live" AS "live", "public"."executions"."has_results" AS "has_results", "LT?"."id" AS "LA?" from "public"."executions" left outer join "public"."processes" "LT?" on "LT?"."id" = "public"."executions"."processid" where (processid = ?) order by "public"."executions"."id" desc limit ? offset ?;
Times reported: Jan 07 16h, count 1, duration 0ms, avg 0ms
Normalized slowest queries #10: min 0ms, max 0ms, avg 0ms, 18 executions, total 0ms
select cast(count(*) / cast(setting as numeric) * ? as int) from pg_stat_activity, pg_settings where name = ? group by setting;
Times reported: Jan 07 16h, count 18, duration 0ms, avg 0ms
Normalized slowest queries #11: min 0ms, max 0ms, avg 0ms, 1 execution, total 0ms
select count(*) from "public"."executions" left outer join "public"."processes" "LT?" on "LT?"."id" = "public"."executions"."processid" where (processid = ?);
Times reported: Jan 07 16h, count 1, duration 0ms, avg 0ms
Normalized slowest queries #12: min 0ms, max 0ms, avg 0ms, 1 execution, total 0ms
select pricedatetime, open, high, low, close, volume, symbolid, symbol, sastdatetimewritten, interval, spike_threshold, classname from candle_spikes where classname = ? and interval = ? and symbol in (...) and gap_check = ? and pricedatetime between ? and ? and ( select count(*) from t15 where symbolid = candle_spikes.symbolid and pricedatetime between ? and ?) = ? order by pricedatetime desc limit ?;
Times reported: Jan 07 16h, count 1, duration 0ms, avg 0ms
Normalized slowest queries #13: min 0ms, max 0ms, avg 0ms, 466 executions, total 0ms
commit;
Times reported: Jan 07 16h, count 466, duration 0ms, avg 0ms
Normalized slowest queries #14: min 0ms, max 0ms, avg 0ms, 400 executions, total 0ms
with rar_max as ( select resultuid from relevance_keylevels_results order by resultuid desc limit ? ), kr as ( select a.*, rr.age, rr.relevant from keylevels_results a left outer join relevance_keylevels_results rr on a.resultuid = rr.resultuid where case when false = ? then true else a.resultuid > ( select min(resultuid) from relevance_keylevels_results) end ), all_results as ( select kr.resultuid as resultuid, kr.direction as direction, s.exchange as exchange, s.symbolid as symbolid, coalesce(bim.code, s.symbol) as symbol_code, s.longname as symbol_name, s.timegranularity as interval, p.patternname as pattern_name, kr.breakout as breakout, kr.atbaridentified as identified, dtt.timezone as timezone, kr.patternlengthbars as length, g.basegroupname, newlevels.filtered, case when kr.age is not null then kr.age when kr.resultuid <= rm.resultuid then ? else ? end as age, case when kr.relevant is not null then kr.relevant when kr.resultuid <= rm.resultuid then ? else ? end as relevant, cps.pip from kr inner join brokersymbollist bsl on bsl.brokerid = ? and bsl.symbolid = kr.symbolid inner join symbols s on bsl.symbolid = s.symbolid and s.nonliquid = ? inner join symbolgroup sg on s.symbolid = sg.symbolid inner join groups g on sg.groupid = g.groupid inner join brokergroups bg on g.groupid = bg.groupid and bsl.brokerid = bg.brokerid inner join hrspatterns p on kr.patternid = p.patternid inner join downloadersymbolsettings dss on s.symbolid = dss.symbolid inner join datafeedstimetable dtt on dss.classname = dtt.classname and dtt.dayofweek = ? inner join rar_max rm on ? = ? left outer join autochartist_symbolupdates au on dss.symbolid = au.symbolid left outer join relevance_keylevels_results rar on rar.resultuid = kr.resultuid left join lateral calc_kl_signal_filter (kr.resultuid) newlevels on true left join currencypips cps on cps.symbol = s.symbol left outer join brokerinstrumentmap bim on dss.datafeedinstrumentid = bim.datafeedinstrumentid and bim.brokerid = bsl.brokerid and bim.type = ? where kr.gmttimefound > now() - interval ? and dss.enabled = ? and s.deleted = ? and (kr.simulation = ? or kr.simulation is null) and (? = ? or s.timegranularity in (...)) and (? = ? or s.exchange in (...)) and (? = ? or coalesce(bim.code, s.symbol) in (...)) and (? = ? or p.patternname in (...)) and (? = ? or kr.patternclassid in (...)) and (? = ? or kr.patternlengthbars <= ?) and kr.patternstarttime::timestamp without time zone >= coalesce(au.earliestpricedatetime, ?::timestamp without time zone) -- to make sure patternstarttime is in our t-tables ), results as ( select distinct on (symbolid) * from all_results where (false = ? or relevant = ?) and (? = ? or age <= ?) order by symbolid, resultuid ) select * from results order by identified desc, length desc limit ?;
Times reported: Jan 07 16h, count 400, duration 0ms, avg 0ms
Normalized slowest queries #15: min 0ms, max 0ms, avg 0ms, 240 executions, total 0ms
select count(*), sum(size), extract(epoch from now() - min(modification)) from pg_ls_waldir ();
Times reported: Jan 07 16h, count 240, duration 0ms, avg 0ms
Normalized slowest queries #16: min 0ms, max 0ms, avg 0ms, 240 executions, total 0ms
select system_identifier from pg_control_system ();
Times reported: Jan 07 16h, count 240, duration 0ms, avg 0ms
Normalized slowest queries #17: min 0ms, max 0ms, avg 0ms, 9 executions, total 0ms
select groupid, exchange, groupname, symbol, longname from prfsymboltree where brokerid = ? order by groupname, symbol;
Times reported: Jan 07 16h, count 9, duration 0ms, avg 0ms
Normalized slowest queries #18: min 0ms, max 0ms, avg 0ms, 5 executions, total 0ms
insert into t15 (symbolid, pricedatetime, open, high, low, close, volume, bsf, sastdatetimereceived) values (?, ?::timestamp without time zone, ?, ?.?, ?.?, ?.?, ?, ?, ?::timestamp without time zone) on conflict (symbolid, pricedatetime) do nothing;
Times reported: Jan 07 16h, count 5, duration 0ms, avg 0ms
Normalized slowest queries #19: min 0ms, max 0ms, avg 0ms, 1 execution, total 0ms
select * from processresults limit ?;
Times reported: Jan 07 16h, count 1, duration 0ms, avg 0ms
Normalized slowest queries #20: min 0ms, max 0ms, avg 0ms, 8 executions, total 0ms
select updatedatafeedslatestrun (?);
Times reported: Jan 07 16h, count 8, duration 0ms, avg 0ms
Time consuming prepare
Time consuming prepare #1: total 10s933ms, 14,226 executions; min 0ms, max 25ms, avg 0ms
SELECT ;
Times reported: Jan 07 16h, count 14,226, duration 10s933ms, avg 0ms
Examples (database postgres): 2026-01-07 16:21:05 (25ms); 2026-01-07 16:45:39 (13ms); 2026-01-07 16:41:11 (12ms)
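These prepare totals line up with the General Activity figures at the top of the report (59,909 prepares against 196,250 binds in hour 16). The "Bind/Prepare" ratio quoted there is simply the quotient of those two counters, i.e. how many times each parsed statement was executed on average; as a quick check:

```python
# Counters taken from the General Activity table of this report.
prepares = 59_909   # parse/prepare messages logged in hour 16
binds = 196_250     # bind (execution) messages logged in hour 16

# pgBadger's "Bind/Prepare" column: average executions per prepared
# statement, rounded to two decimals as the report displays it.
bind_prepare_ratio = round(binds / prepares, 2)
```

A ratio around 3.3 means each prepared statement is reused only a few times before being re-parsed, which is consistent with the short average session duration reported above.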
Time consuming prepare #2: total 6s695ms, 6,283 executions; min 0ms, max 24ms, avg 1ms
WITH rar_max as ( ;
Times reported: 16h, count 6,283, duration 6s695ms, avg 1ms
Examples (database postgres): 2026-01-07 16:11:57 (24ms); 2026-01-07 16:21:05 (19ms); 2026-01-07 16:32:05 (19ms)
Time consuming prepare #3: total 1s453ms, 1,277 executions; min 0ms, max 2ms, avg 1ms
SELECT symbolid, ;
Times reported: 16h, count 1,277, duration 1s453ms, avg 1ms
Examples (database postgres): 2026-01-07 16:01:01 (2ms); 2026-01-07 16:02:55 (2ms); 2026-01-07 16:01:08 (2ms)
Time consuming prepare #4: total 1s77ms, 8,291 executions; min 0ms, max 9ms, avg 0ms
SET extra_float_digits = 3;
Times reported: 16h, count 8,291, duration 1s77ms, avg 0ms
Examples (database postgres): 2026-01-07 16:54:15 (9ms); 2026-01-07 16:41:41 (7ms); 2026-01-07 16:41:30 (7ms)
Time consuming prepare #5: total 683ms, 702 executions; min 0ms, max 1ms, avg 0ms
SELECT s.symbolid, dss.downloadfrequency, dss.downloadersymbol;
Times reported: 16h, count 702, duration 683ms, avg 0ms
Examples (database postgres): 2026-01-07 16:32:01 (1ms); 2026-01-07 16:01:08 (1ms); 2026-01-07 16:01:01 (1ms)
Time consuming prepare #6: total 648ms, 12,703 executions; min 0ms, max 8ms, avg 0ms
select 1;
Times reported: 16h, count 12,703, duration 648ms, avg 0ms
Examples (database postgres): 2026-01-07 16:12:27 (8ms); 2026-01-07 16:40:59 (7ms); 2026-01-07 16:45:41 (5ms)
Time consuming prepare #7: total 266ms, 3,102 executions; min 0ms, max 0ms, avg 0ms
INSERT INTO T30 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported: 16h, count 3,102, duration 266ms, avg 0ms
Examples (database postgres): 2026-01-07 16:31:57 (0ms); 2026-01-07 16:11:56 (0ms); 2026-01-07 16:41:52 (0ms)
Time consuming prepare #8: total 202ms, 2,078 executions; min 0ms, max 0ms, avg 0ms
INSERT INTO T60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported: 16h, count 2,078, duration 202ms, avg 0ms
Examples (database postgres): 2026-01-07 16:11:56 (0ms); 2026-01-07 16:00:44 (0ms); 2026-01-07 16:32:17 (0ms)
Time consuming prepare #9: total 160ms, 1,068 executions; min 0ms, max 1ms, avg 0ms
INSERT INTO T15 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported: 16h, count 1,068, duration 160ms, avg 0ms
Examples (database postgres): 2026-01-07 16:32:13 (1ms); 2026-01-07 16:46:43 (0ms); 2026-01-07 16:11:56 (0ms)
Time consuming prepare #10: total 109ms, 16 executions; min 5ms, max 8ms, avg 6ms
with sym_info as ( ;
Times reported: 16h, count 16, duration 109ms, avg 6ms
Examples (database postgres): 2026-01-07 16:06:57 (8ms); 2026-01-07 16:21:46 (7ms); 2026-01-07 16:21:43 (7ms)
Time consuming prepare #11: total 106ms, 8,265 executions; min 0ms, max 3ms, avg 0ms
SET application_name = 'PostgreSQL JDBC Driver';
Times reported: 16h, count 8,265, duration 106ms, avg 0ms
Examples (database postgres): 2026-01-07 16:32:04 (3ms); 2026-01-07 16:12:28 (1ms); 2026-01-07 16:31:04 (1ms)
Time consuming prepare #12: total 103ms, 908 executions; min 0ms, max 0ms, avg 0ms
INSERT INTO T240 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported: 16h, count 908, duration 103ms, avg 0ms
Examples (database postgres): 2026-01-07 16:10:40 (0ms); 2026-01-07 16:01:58 (0ms); 2026-01-07 16:00:13 (0ms)
Time consuming prepare #13: total 43ms, 36 executions; min 0ms, max 3ms, avg 1ms
WITH last_candle AS ( ;
Times reported: 16h, count 36, duration 43ms, avg 1ms
Examples (database postgres): 2026-01-07 16:04:00 (3ms); 2026-01-07 16:16:00 (3ms); 2026-01-07 16:12:00 (3ms)
Time consuming prepare #14: total 42ms, 18 executions; min 1ms, max 3ms, avg 2ms
select cast(count(*) / cast(setting as numeric) * 100 as int) from pg_stat_activity, pg_settings WHERE name = 'max_connections' group by setting;
Times reported: 16h, count 18, duration 42ms, avg 2ms
Examples (database postgres): 2026-01-07 16:01:05 (3ms); 2026-01-07 16:11:01 (2ms); 2026-01-07 16:10:03 (2ms)
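Rank 14 is a monitoring probe: it divides the backend count from pg_stat_activity by the max_connections setting to get the percentage of the connection limit in use. The same arithmetic, sketched with illustrative numbers (neither the backend count nor the actual max_connections setting appears in this report):

```python
from decimal import Decimal

# Hypothetical inputs, for illustration only.
active_backends = 42      # would come from count(*) over pg_stat_activity
max_connections = "100"   # pg_settings.setting is text, cast to numeric

# Mirrors: cast(count(*) / cast(setting as numeric) * 100 as int).
# Decimal stands in for Postgres numeric, so the division is exact
# and the final truncation to int matches the SQL cast.
usage_pct = int(active_backends / Decimal(max_connections) * 100)
```

Run 18 times in the hour, this suggests a roughly once-per-several-minutes connection-saturation check.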
Time consuming prepare #15: total 38ms, 248 executions; min 0ms, max 0ms, avg 0ms
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME, CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema' WHEN true THEN CASE WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TABLE' WHEN 'v' THEN 'SYSTEM VIEW' WHEN 'i' THEN 'SYSTEM INDEX' ELSE NULL END WHEN n.nspname = 'pg_toast' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TOAST TABLE' WHEN 'i' THEN 'SYSTEM TOAST INDEX' ELSE NULL END ELSE CASE c.relkind WHEN 'r' THEN 'TEMPORARY TABLE' WHEN 'p' THEN 'TEMPORARY TABLE' WHEN 'i' THEN 'TEMPORARY INDEX' WHEN 'S' THEN 'TEMPORARY SEQUENCE' WHEN 'v' THEN 'TEMPORARY VIEW' ELSE NULL END END WHEN false THEN CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'p' THEN 'PARTITIONED TABLE' WHEN 'i' THEN 'INDEX' WHEN 'S' THEN 'SEQUENCE' WHEN 'v' THEN 'VIEW' WHEN 'c' THEN 'TYPE' WHEN 'f' THEN 'FOREIGN TABLE' WHEN 'm' THEN 'MATERIALIZED VIEW' ELSE NULL END ELSE NULL END AS TABLE_TYPE, d.description AS REMARKS, '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0) LEFT JOIN pg_catalog.pg_class dc ON (d.classoid = dc.oid AND dc.relname = 'pg_class') LEFT JOIN pg_catalog.pg_namespace dn ON (dn.oid = dc.relnamespace AND dn.nspname = 'pg_catalog') WHERE c.relnamespace = n.oid AND c.relname LIKE 'PROBABLYNOT' AND (false OR (c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema')) ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME;
Times reported: 16h, count 248, duration 38ms, avg 0ms
Examples (database postgres): three occurrences, all at 2026-01-07 16:13:26, 0ms each
Time consuming prepare #16: total 30ms, 200 executions; min 0ms, max 0ms, avg 0ms
select category, ;
Times reported: 16h, count 200, duration 30ms, avg 0ms
Examples (database postgres): 2026-01-07 16:41:55 (0ms); 2026-01-07 16:41:19 (0ms); 2026-01-07 16:31:04 (0ms)
Time consuming prepare #17: total 23ms, 24 executions; min 0ms, max 1ms, avg 0ms
select distinct classname, to_char(created_datetime, 'yyyy-mm-dd HH24:MI'), to_char(cleared_datetime, 'yyyy-mm-dd HH24:MI'), action_to_take, description, created_datetime from datafeed_restarter_events where (is_current_entry = 1 OR cleared_datetime > current_timestamp - interval '17 hour') order by created_datetime desc;
Times reported: 16h, count 24, duration 23ms, avg 0ms
Examples (database postgres): 2026-01-07 16:17:42 (1ms); 2026-01-07 16:02:50 (1ms); 2026-01-07 16:32:44 (1ms)
Rank 18 (prepare): total duration 17ms, executed 24 times, min 0ms, max 1ms, avg 0ms
Query: select feedname, to_char(latestrxtime, 'yyyy-mm-dd HH24:MI'), to_char(LatestDBWriteTime, 'yyyy-mm-dd HH24:MI'), to_char(LatestStartupTime, 'yyyy-mm-dd HH24:MI'), StartupTimeInMinutes, dm.source_type, dm.transport_type, case when latestrxtime < (CURRENT_TIMESTAMP - 5 * interval '1 minute') then 'X' else 'OK' end, case when (feedname ilike '%_EOD' OR feedname ilike 'IQFEED_DAILIES' or feedname ilike 'YAHOO%' or feedname ilike 'QUANDL_FUTURES%' or feedname ilike 'BAR_CHART') then case when LatestDBWriteTime < (CURRENT_TIMESTAMP - 24 * interval '1 hour') then 'X' else 'OK' end else case when (LatestDBWriteTime < (CURRENT_TIMESTAMP - 15 * interval '1 minute') and LatestStartupTime < (CURRENT_TIMESTAMP - 30 * interval '1 minute')) OR latestrxtime < CURRENT_TIMESTAMP - interval '2 hour' then 'X' else 'OK' end end as statusDB, comment from datafeeds_latestrun dlr left outer join datafeeds df on dlr.feedname ilike df.name inner join datafeeds_metadata dm on df.metadata_id = dm.id order by feedname;
Times reported (Time consuming prepare #18):
Hour 16: count 24, duration 17ms, avg 0ms
-
select feedname, to_char(latestrxtime, 'yyyy-mm-dd HH24:MI'), to_char(LatestDBWriteTime, 'yyyy-mm-dd HH24:MI'), to_char(LatestStartupTime, 'yyyy-mm-dd HH24:MI'), StartupTimeInMinutes, dm.source_type, dm.transport_type, case when latestrxtime < (CURRENT_TIMESTAMP - 5 * interval '1 minute') then 'X' else 'OK' end, case when (feedname ilike '%_EOD' OR feedname ilike 'IQFEED_DAILIES' or feedname ilike 'YAHOO%' or feedname ilike 'QUANDL_FUTURES%' or feedname ilike 'BAR_CHART') then case when LatestDBWriteTime < (CURRENT_TIMESTAMP - 24 * interval '1 hour') then 'X' else 'OK' end else case when (LatestDBWriteTime < (CURRENT_TIMESTAMP - 15 * interval '1 minute') and LatestStartupTime < (CURRENT_TIMESTAMP - 30 * interval '1 minute')) OR latestrxtime < CURRENT_TIMESTAMP - interval '2 hour' then 'X' else 'OK' end end as statusDB, comment from datafeeds_latestrun dlr left outer join datafeeds df on dlr.feedname ilike df.name inner join datafeeds_metadata dm on df.metadata_id = dm.id order by feedname;
Date: 2026-01-07 16:02:50 Duration: 1ms Database: postgres
-
select feedname, to_char(latestrxtime, 'yyyy-mm-dd HH24:MI'), to_char(LatestDBWriteTime, 'yyyy-mm-dd HH24:MI'), to_char(LatestStartupTime, 'yyyy-mm-dd HH24:MI'), StartupTimeInMinutes, dm.source_type, dm.transport_type, case when latestrxtime < (CURRENT_TIMESTAMP - 5 * interval '1 minute') then 'X' else 'OK' end, case when (feedname ilike '%_EOD' OR feedname ilike 'IQFEED_DAILIES' or feedname ilike 'YAHOO%' or feedname ilike 'QUANDL_FUTURES%' or feedname ilike 'BAR_CHART') then case when LatestDBWriteTime < (CURRENT_TIMESTAMP - 24 * interval '1 hour') then 'X' else 'OK' end else case when (LatestDBWriteTime < (CURRENT_TIMESTAMP - 15 * interval '1 minute') and LatestStartupTime < (CURRENT_TIMESTAMP - 30 * interval '1 minute')) OR latestrxtime < CURRENT_TIMESTAMP - interval '2 hour' then 'X' else 'OK' end end as statusDB, comment from datafeeds_latestrun dlr left outer join datafeeds df on dlr.feedname ilike df.name inner join datafeeds_metadata dm on df.metadata_id = dm.id order by feedname;
Date: 2026-01-07 16:32:44 Duration: 1ms Database: postgres
-
select feedname, to_char(latestrxtime, 'yyyy-mm-dd HH24:MI'), to_char(LatestDBWriteTime, 'yyyy-mm-dd HH24:MI'), to_char(LatestStartupTime, 'yyyy-mm-dd HH24:MI'), StartupTimeInMinutes, dm.source_type, dm.transport_type, case when latestrxtime < (CURRENT_TIMESTAMP - 5 * interval '1 minute') then 'X' else 'OK' end, case when (feedname ilike '%_EOD' OR feedname ilike 'IQFEED_DAILIES' or feedname ilike 'YAHOO%' or feedname ilike 'QUANDL_FUTURES%' or feedname ilike 'BAR_CHART') then case when LatestDBWriteTime < (CURRENT_TIMESTAMP - 24 * interval '1 hour') then 'X' else 'OK' end else case when (LatestDBWriteTime < (CURRENT_TIMESTAMP - 15 * interval '1 minute') and LatestStartupTime < (CURRENT_TIMESTAMP - 30 * interval '1 minute')) OR latestrxtime < CURRENT_TIMESTAMP - interval '2 hour' then 'X' else 'OK' end end as statusDB, comment from datafeeds_latestrun dlr left outer join datafeeds df on dlr.feedname ilike df.name inner join datafeeds_metadata dm on df.metadata_id = dm.id order by feedname;
Date: 2026-01-07 16:17:42 Duration: 0ms Database: postgres
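A side note on the feed-status query above: it joins on `dlr.feedname ilike df.name`, using ILIKE as a case-insensitive equality test. Because the right-hand side is treated as a pattern, any `_` or `%` stored in `datafeeds.name` acts as a wildcard, so near-miss names can join unintentionally. A minimal sketch of the hazard, using SQLite's default ASCII LIKE (case-insensitive, mirroring ILIKE); the names are illustrative, not taken from the log:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# In "feedname ilike df.name" the stored name is the pattern side, so an
# underscore in it matches any single character in the feed name.
hit = conn.execute("SELECT 'IQFEEDXDAILIES' LIKE 'IQFEED_DAILIES'").fetchone()[0]
exact = conn.execute(
    "SELECT lower('IQFEEDXDAILIES') = lower('IQFEED_DAILIES')"
).fetchone()[0]
print(hit, exact)  # 1 0 -- LIKE matches the near-miss name; plain equality does not
```

Joining on `lower(dlr.feedname) = lower(df.name)` (optionally backed by an expression index on `lower(name)`) would keep the case-insensitive intent without the wildcard semantics.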
Rank 19 (prepare): total duration 15ms, executed 6 times, min 2ms, max 3ms, avg 2ms
Query: select client_addr, count(1) from pg_stat_activity, pg_settings WHERE name = 'max_connections' group by client_addr, setting having (client_addr is not null OR (client_addr is null and count(1) > (cast(setting as numeric) / 3 * 2))) order by count desc;
Times reported (Time consuming prepare #19):
Hour 16: count 6, duration 15ms, avg 2ms
-
select client_addr, count(1) from pg_stat_activity, pg_settings WHERE name = 'max_connections' group by client_addr, setting having (client_addr is not null OR (client_addr is null and count(1) > (cast(setting as numeric) / 3 * 2))) order by count desc;
Date: 2026-01-07 16:40:04 Duration: 3ms Database: postgres
-
select client_addr, count(1) from pg_stat_activity, pg_settings WHERE name = 'max_connections' group by client_addr, setting having (client_addr is not null OR (client_addr is null and count(1) > (cast(setting as numeric) / 3 * 2))) order by count desc;
Date: 2026-01-07 16:00:04 Duration: 2ms Database: postgres
-
select client_addr, count(1) from pg_stat_activity, pg_settings WHERE name = 'max_connections' group by client_addr, setting having (client_addr is not null OR (client_addr is null and count(1) > (cast(setting as numeric) / 3 * 2))) order by count desc;
Date: 2026-01-07 16:30:04 Duration: 2ms Database: postgres
Rank 20 (prepare): total duration 14ms, executed 6 times, min 2ms, max 3ms, avg 2ms
Query: with rankedmt4 as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors ), last_feed_entry as ( select * from rankedmt4 where r = 1 ), ok_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where status = 'OK' ), earliest_entry_after_ok as ( select m.datafeedname, min(m.eventtimestamp) as eventtimestamp from mt4datafeederrors m left outer join ( select datafeedname, eventtimestamp from ok_entries where r = 1) oo on m.datafeedname = oo.datafeedname where m.eventtimestamp > coalesce(oo.eventtimestamp, '1900-01-01'::timestamp without time zone) group by m.datafeedname ), notified_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where notified is not null and notified <> '' ), broker as ( select *, row_number() over (partition by feedname order by brokerid) r from ( select distinct b.brokerid, b.name as brokername, dss.classname as feedname from downloadersymbolsettings dss inner join brokersymbollist bsl on dss.symbolid = bsl.symbolid inner join broker b on bsl.brokerid = b.brokerid where dss.enabled = 1) a ) select last.id, last.datafeedname, last.eventtimestamp, last.status, last.errordescription, last.serveraddress, last.username, note.notified, note.eventtimestamp, broker.brokername from last_feed_entry last inner join earliest_entry_after_ok after_ok on last.datafeedname = after_ok.datafeedname inner join broker on last.datafeedname = broker.feedname left outer join ok_entries ok on ok.datafeedname = last.datafeedname left outer join notified_entries note on note.datafeedname = last.datafeedname and note.r = 1 where (ok.r is null or ok.r = 1) and last.datafeedname not in ( select distinct datafeedname from last_feed_entry where status = 'OK') and extract(epoch from (last.eventtimestamp - after_ok.eventtimestamp)) > 60 * 60 and last.eventtimestamp > current_timestamp - interval '1 day' and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval '10 hours') and last.eventtimestamp > current_timestamp - interval '1 hour' and broker.r = 1;
Times reported (Time consuming prepare #20):
Hour 16: count 6, duration 14ms, avg 2ms
-
with rankedmt4 as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors ), last_feed_entry as ( select * from rankedmt4 where r = 1 ), ok_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where status = 'OK' ), earliest_entry_after_ok as ( select m.datafeedname, min(m.eventtimestamp) as eventtimestamp from mt4datafeederrors m left outer join ( select datafeedname, eventtimestamp from ok_entries where r = 1) oo on m.datafeedname = oo.datafeedname where m.eventtimestamp > coalesce(oo.eventtimestamp, '1900-01-01'::timestamp without time zone) group by m.datafeedname ), notified_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where notified is not null and notified <> '' ), broker as ( select *, row_number() over (partition by feedname order by brokerid) r from ( select distinct b.brokerid, b.name as brokername, dss.classname as feedname from downloadersymbolsettings dss inner join brokersymbollist bsl on dss.symbolid = bsl.symbolid inner join broker b on bsl.brokerid = b.brokerid where dss.enabled = 1) a ) select last.id, last.datafeedname, last.eventtimestamp, last.status, last.errordescription, last.serveraddress, last.username, note.notified, note.eventtimestamp, broker.brokername from last_feed_entry last inner join earliest_entry_after_ok after_ok on last.datafeedname = after_ok.datafeedname inner join broker on last.datafeedname = broker.feedname left outer join ok_entries ok on ok.datafeedname = last.datafeedname left outer join notified_entries note on note.datafeedname = last.datafeedname and note.r = 1 where (ok.r is null or ok.r = 1) and last.datafeedname not in ( select distinct datafeedname from last_feed_entry where status = 'OK') and extract(epoch from (last.eventtimestamp - after_ok.eventtimestamp)) > 60 * 60 and last.eventtimestamp > 
current_timestamp - interval '1 day' and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval '10 hours') and last.eventtimestamp > current_timestamp - interval '1 hour' and broker.r = 1;
Date: 2026-01-07 16:30:03 Duration: 3ms Database: postgres
-
with rankedmt4 as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors ), last_feed_entry as ( select * from rankedmt4 where r = 1 ), ok_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where status = 'OK' ), earliest_entry_after_ok as ( select m.datafeedname, min(m.eventtimestamp) as eventtimestamp from mt4datafeederrors m left outer join ( select datafeedname, eventtimestamp from ok_entries where r = 1) oo on m.datafeedname = oo.datafeedname where m.eventtimestamp > coalesce(oo.eventtimestamp, '1900-01-01'::timestamp without time zone) group by m.datafeedname ), notified_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where notified is not null and notified <> '' ), broker as ( select *, row_number() over (partition by feedname order by brokerid) r from ( select distinct b.brokerid, b.name as brokername, dss.classname as feedname from downloadersymbolsettings dss inner join brokersymbollist bsl on dss.symbolid = bsl.symbolid inner join broker b on bsl.brokerid = b.brokerid where dss.enabled = 1) a ) select last.id, last.datafeedname, last.eventtimestamp, last.status, last.errordescription, last.serveraddress, last.username, note.notified, note.eventtimestamp, broker.brokername from last_feed_entry last inner join earliest_entry_after_ok after_ok on last.datafeedname = after_ok.datafeedname inner join broker on last.datafeedname = broker.feedname left outer join ok_entries ok on ok.datafeedname = last.datafeedname left outer join notified_entries note on note.datafeedname = last.datafeedname and note.r = 1 where (ok.r is null or ok.r = 1) and last.datafeedname not in ( select distinct datafeedname from last_feed_entry where status = 'OK') and extract(epoch from (last.eventtimestamp - after_ok.eventtimestamp)) > 60 * 60 and last.eventtimestamp > 
current_timestamp - interval '1 day' and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval '10 hours') and last.eventtimestamp > current_timestamp - interval '1 hour' and broker.r = 1;
Date: 2026-01-07 16:00:02 Duration: 2ms Database: postgres
-
with rankedmt4 as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors ), last_feed_entry as ( select * from rankedmt4 where r = 1 ), ok_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where status = 'OK' ), earliest_entry_after_ok as ( select m.datafeedname, min(m.eventtimestamp) as eventtimestamp from mt4datafeederrors m left outer join ( select datafeedname, eventtimestamp from ok_entries where r = 1) oo on m.datafeedname = oo.datafeedname where m.eventtimestamp > coalesce(oo.eventtimestamp, '1900-01-01'::timestamp without time zone) group by m.datafeedname ), notified_entries as ( select *, row_number() over (partition by datafeedname order by eventtimestamp desc) r from mt4datafeederrors where notified is not null and notified <> '' ), broker as ( select *, row_number() over (partition by feedname order by brokerid) r from ( select distinct b.brokerid, b.name as brokername, dss.classname as feedname from downloadersymbolsettings dss inner join brokersymbollist bsl on dss.symbolid = bsl.symbolid inner join broker b on bsl.brokerid = b.brokerid where dss.enabled = 1) a ) select last.id, last.datafeedname, last.eventtimestamp, last.status, last.errordescription, last.serveraddress, last.username, note.notified, note.eventtimestamp, broker.brokername from last_feed_entry last inner join earliest_entry_after_ok after_ok on last.datafeedname = after_ok.datafeedname inner join broker on last.datafeedname = broker.feedname left outer join ok_entries ok on ok.datafeedname = last.datafeedname left outer join notified_entries note on note.datafeedname = last.datafeedname and note.r = 1 where (ok.r is null or ok.r = 1) and last.datafeedname not in ( select distinct datafeedname from last_feed_entry where status = 'OK') and extract(epoch from (last.eventtimestamp - after_ok.eventtimestamp)) > 60 * 60 and last.eventtimestamp > 
current_timestamp - interval '1 day' and (note.eventtimestamp is null or note.eventtimestamp < current_timestamp - interval '10 hours') and last.eventtimestamp > current_timestamp - interval '1 hour' and broker.r = 1;
Date: 2026-01-07 16:20:02 Duration: 2ms Database: postgres
Time consuming bind
Rank 1 (bind): total duration 57s203ms, executed 11,297 times, min 0ms, max 54ms, avg 5ms
Query: WITH rar_max as ( ;
Times reported (Time consuming bind #1):
Jan 07, hour 16: count 11,297, duration 57s203ms, avg 5ms
-
WITH rar_max as ( ;
Date: 2026-01-07 16:41:10 Duration: 54ms Database: postgres parameters: $1 = 't', $2 = '489', $3 = '0', $4 = '0', $5 = '0', $6 = '', $7 = '0', $8 = '', $9 = '0', $10 = '', $11 = '0', $12 = '0', $13 = '0', $14 = '0', $15 = '0', $16 = 't', $17 = '0', $18 = '0'
-
WITH rar_max as ( ;
Date: 2026-01-07 16:21:05 Duration: 51ms Database: postgres parameters: $1 = '607462734292960301', $2 = '607462734292960301', $3 = '607462734292960301'
-
WITH rar_max as ( ;
Date: 2026-01-07 16:56:24 Duration: 50ms Database: postgres parameters: $1 = '607463141104208301', $2 = '607463141104208301', $3 = '607463141104208301'
Rank 2 (bind): total duration 33s951ms, executed 72,825 times, min 0ms, max 31ms, avg 0ms
Query: SELECT ;
Times reported (Time consuming bind #2):
Hour 16: count 72,825, duration 33s951ms, avg 0ms
-
SELECT ;
Date: 2026-01-07 16:21:05 Duration: 31ms Database: postgres parameters: $1 = '958', $2 = '958', $3 = '515840248627472300'
-
SELECT ;
Date: 2026-01-07 16:02:06 Duration: 30ms Database: postgres parameters: $1 = '958', $2 = '958', $3 = '515840248630020300'
-
SELECT ;
Date: 2026-01-07 16:11:57 Duration: 20ms Database: postgres parameters: $1 = '958', $2 = '958', $3 = '515840243153282300'
Rank 3 (bind): total duration 2s648ms, executed 1,277 times, min 0ms, max 12ms, avg 2ms
Query: SELECT symbolid, ;
Times reported (Time consuming bind #3):
Hour 16: count 1,277, duration 2s648ms, avg 2ms
-
SELECT symbolid, ;
Date: 2026-01-07 16:40:56 Duration: 12ms Database: postgres parameters: $1 = 'BDSWISS', $2 = '15', $3 = 'SPX500', $4 = 'US30'
-
SELECT symbolid, ;
Date: 2026-01-07 16:00:03 Duration: 6ms Database: postgres parameters: $1 = 'MILLENNIUMPF', $2 = '15', $3 = 'AUDCAD.FX'
-
SELECT symbolid, ;
Date: 2026-01-07 16:46:55 Duration: 4ms Database: postgres parameters: $1 = 'GLOBALGTMT5', $2 = '15', $3 = 'NZDUSD', $4 = 'SHBUSD', $5 = 'SOLUSD'
Rank 4 (bind): total duration 1s439ms, executed 74,893 times, min 0ms, max 14ms, avg 0ms
Query: select 1;
Times reported (Time consuming bind #4):
Hour 16: count 74,892, duration 1s439ms, avg 0ms
Hour 17: count 1, duration 0ms, avg 0ms
-
select 1;
Date: 2026-01-07 16:00:02 Duration: 14ms Database: postgres
-
select 1;
Date: 2026-01-07 16:41:11 Duration: 5ms Database: postgres
-
select 1;
Date: 2026-01-07 16:55:40 Duration: 3ms Database: postgres
Rank 5 (bind): total duration 1s122ms, executed 702 times, min 1ms, max 6ms, avg 1ms
Query: SELECT s.symbolid, dss.downloadfrequency, dss.downloadersymbol;
Times reported (Time consuming bind #5):
Hour 16: count 702, duration 1s122ms, avg 1ms
-
SELECT s.symbolid, dss.downloadfrequency, dss.downloadersymbol;
Date: 2026-01-07 16:32:01 Duration: 6ms Database: postgres parameters: $1 = 'FPMARKETS'
-
SELECT s.symbolid, dss.downloadfrequency, dss.downloadersymbol;
Date: 2026-01-07 16:01:01 Duration: 4ms Database: postgres parameters: $1 = 'FPMARKETS'
-
SELECT s.symbolid, dss.downloadfrequency, dss.downloadersymbol;
Date: 2026-01-07 16:00:02 Duration: 3ms Database: postgres parameters: $1 = 'FPMARKETS'
Rank 6 (bind): total duration 698ms, executed 16 times, min 31ms, max 47ms, avg 43ms
Query: with sym_info as ( ;
Times reported (Time consuming bind #6):
Hour 16: count 16, duration 698ms, avg 43ms
-
with sym_info as ( ;
Date: 2026-01-07 16:06:57 Duration: 47ms Database: postgres parameters: $1 = '692', $2 = 'Forex', $3 = 'Forex', $4 = '692', $5 = 'Forex', $6 = '692', $7 = '692', $8 = 'Forex', $9 = '692'
-
with sym_info as ( ;
Date: 2026-01-07 16:21:46 Duration: 45ms Database: postgres parameters: $1 = '617', $2 = 'Forex', $3 = 'Forex', $4 = '617', $5 = 'Forex', $6 = '617', $7 = '617', $8 = 'Forex', $9 = '617'
-
with sym_info as ( ;
Date: 2026-01-07 16:21:43 Duration: 44ms Database: postgres parameters: $1 = '620', $2 = 'Forex', $3 = 'Forex', $4 = '620', $5 = 'Forex', $6 = '620', $7 = '620', $8 = 'Forex', $9 = '620'
Rank 7 (bind): total duration 548ms, executed 76 times, min 4ms, max 36ms, avg 7ms
Query: WITH last_candle AS ( ;
Times reported (Time consuming bind #7):
Hour 16: count 76, duration 548ms, avg 7ms
-
WITH last_candle AS ( ;
Date: 2026-01-07 16:32:01 Duration: 36ms Database: postgres parameters: $1 = '489', $2 = '489'
-
WITH last_candle AS ( ;
Date: 2026-01-07 16:16:00 Duration: 12ms Database: postgres parameters: $1 = '558', $2 = '558'
-
WITH last_candle AS ( ;
Date: 2026-01-07 16:32:01 Duration: 12ms Database: postgres parameters: $1 = '558', $2 = '558'
Rank 8 (bind): total duration 447ms, executed 48 times, min 0ms, max 20ms, avg 9ms
Query: WITH /*Latest.JapSticks*/ all_results AS ( SELECT ;
Times reported (Time consuming bind #8):
Hour 16: count 48, duration 447ms, avg 9ms
-
WITH /*Latest.JapSticks*/ all_results AS ( SELECT ;
Date: 2026-01-07 16:31:22 Duration: 20ms Database: postgres parameters: $1 = '489', $2 = '0', $3 = '0', $4 = '0', $5 = '', $6 = '0', $7 = '', $8 = '0', $9 = '', $10 = '0', $11 = '0'
-
WITH /*Latest.JapSticks*/ all_results AS ( SELECT ;
Date: 2026-01-07 16:26:16 Duration: 19ms Database: postgres parameters: $1 = '489', $2 = '0', $3 = '0', $4 = '0', $5 = '', $6 = '0', $7 = '', $8 = '0', $9 = '', $10 = '0', $11 = '0'
-
WITH /*Latest.JapSticks*/ all_results AS ( SELECT ;
Date: 2026-01-07 16:16:12 Duration: 19ms Database: postgres parameters: $1 = '489', $2 = '0', $3 = '0', $4 = '0', $5 = '', $6 = '0', $7 = '', $8 = '0', $9 = '', $10 = '0', $11 = '0'
Rank 9 (bind): total duration 434ms, executed 21 times, min 0ms, max 36ms, avg 20ms
Query: with wh_patitioned as ( ;
Times reported (Time consuming bind #9):
Hour 16: count 21, duration 434ms, avg 20ms
-
with wh_patitioned as ( ;
Date: 2026-01-07 16:21:01 Duration: 36ms Database: postgres parameters: $1 = '558', $2 = '558', $3 = '558', $4 = '558', $5 = '558', $6 = '558', $7 = '558', $8 = '558', $9 = '558'
-
with wh_patitioned as ( ;
Date: 2026-01-07 16:40:02 Duration: 35ms Database: postgres parameters: $1 = '558', $2 = '558', $3 = '558', $4 = '558', $5 = '558', $6 = '558', $7 = '558', $8 = '558', $9 = '558'
-
with wh_patitioned as ( ;
Date: 2026-01-07 16:40:54 Duration: 31ms Database: postgres parameters: $1 = '558', $2 = '558', $3 = '558', $4 = '558', $5 = '558', $6 = '558', $7 = '558', $8 = '558', $9 = '558'
Rank 10 (bind): total duration 274ms, executed 52 times, min 0ms, max 18ms, avg 5ms
Query: select distinct s.statsid as statsid, sy.exchange as name;
Times reported (Time consuming bind #10):
Hour 16: count 52, duration 274ms, avg 5ms
-
select distinct s.statsid as statsid, sy.exchange as name;
Date: 2026-01-07 16:31:03 Duration: 18ms Database: postgres parameters: $1 = '627', $2 = '627'
-
select distinct s.statsid as statsid, sy.exchange as name;
Date: 2026-01-07 16:31:03 Duration: 18ms Database: postgres parameters: $1 = '631', $2 = '631'
-
select distinct s.statsid as statsid, sy.exchange as name;
Date: 2026-01-07 16:31:02 Duration: 18ms Database: postgres parameters: $1 = '621', $2 = '621'
Rank 11 (bind): total duration 248ms, executed 3,358 times, min 0ms, max 1ms, avg 0ms
Query: INSERT INTO T30 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported (Time consuming bind #11):
Hour 16: count 3,358, duration 248ms, avg 0ms
-
INSERT INTO T30 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:30:04 Duration: 1ms Database: postgres parameters: $1 = '2026-01-07 16:00:00', $2 = '5932.9', $3 = '5933.9', $4 = '5925.9', $5 = '5927.9', $6 = '402', $7 = '500991628268184200', $8 = '0', $9 = '2026-01-07 16:30:04.79', $10 = '2026-01-07 16:30:04.79', $11 = '5932.9', $12 = '5933.9', $13 = '5925.9', $14 = '5927.9', $15 = '402', $16 = '0', $17 = '2026-01-07 16:30:04.79', $18 = '2026-01-07 16:30:04.79'
-
INSERT INTO T30 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:01:07 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 15:30:00', $2 = '462.54', $3 = '462.74', $4 = '462.34', $5 = '462.69', $6 = '109', $7 = '500991628279784200', $8 = '0', $9 = '2026-01-07 16:01:07.943', $10 = '2026-01-07 16:01:07.876', $11 = '462.54', $12 = '462.74', $13 = '462.34', $14 = '462.69', $15 = '109', $16 = '0', $17 = '2026-01-07 16:01:07.943', $18 = '2026-01-07 16:01:07.876'
-
INSERT INTO T30 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:16:55 Duration: 0ms Database: postgres parameters: $1 = '2026-01-06 22:30:00', $2 = '313.615', $3 = '314.705', $4 = '313.315', $5 = '314.635', $6 = '1762', $7 = '515840249421620300', $8 = '0', $9 = '2026-01-07 16:16:55.527', $10 = '2026-01-07 16:16:55.423', $11 = '313.615', $12 = '314.705', $13 = '313.315', $14 = '314.635', $15 = '1762', $16 = '0', $17 = '2026-01-07 16:16:55.527', $18 = '2026-01-07 16:16:55.423'
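The T30 upsert above (and the T15/T60/T240 variants that follow) passes every column twice: once in the VALUES list ($1-$10) and again in the DO UPDATE SET list ($11-$18), and the logged parameters show the two sets are always identical. PostgreSQL's `EXCLUDED` pseudo-table refers to the row proposed for insertion, which would halve the bind payload. A minimal sketch of that pattern in SQLite (which shares the `ON CONFLICT ... DO UPDATE` syntax since 3.24), using a simplified hypothetical table, not the production schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE t30 (
        pricedatetime TEXT,
        symbolid      TEXT,
        close         REAL,
        PRIMARY KEY (pricedatetime, symbolid)
    )
""")
# excluded.* reuses the VALUES tuple, so each column is bound only once.
upsert = """
    INSERT INTO t30 (pricedatetime, symbolid, close)
    VALUES (?, ?, ?)
    ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET
        close = excluded.close
"""
conn.execute(upsert, ("2026-01-07 16:00:00", "500991628268184200", 5927.9))
# Same key again: the conflict path fires and updates the row in place.
conn.execute(upsert, ("2026-01-07 16:00:00", "500991628268184200", 5930.1))
rows = conn.execute("SELECT close FROM t30").fetchall()
print(rows)  # [(5930.1,)]
```

With per-statement bind times already near 0ms the saving here is mostly in wire traffic and statement complexity, but at thousands of executions per hour it is an easy simplification.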
Rank 12 (bind): total duration 244ms, executed 5,750 times, min 0ms, max 0ms, avg 0ms
Query: INSERT INTO T15 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported (Time consuming bind #12):
Hour 16: count 5,750, duration 244ms, avg 0ms
-
INSERT INTO T15 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:26:52 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 16:00:00', $2 = '25585.8', $3 = '25592.3', $4 = '25571.3', $5 = '25589.8', $6 = '3316', $7 = '515840248038958300', $8 = '0', $9 = '2026-01-07 16:26:52.862', $10 = '2026-01-07 16:26:52.756', $11 = '25585.8', $12 = '25592.3', $13 = '25571.3', $14 = '25589.8', $15 = '3316', $16 = '0', $17 = '2026-01-07 16:26:52.862', $18 = '2026-01-07 16:26:52.756'
-
INSERT INTO T15 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:11:56 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 15:45:00', $2 = '49533.9', $3 = '49549.39', $4 = '49523.1', $5 = '49543.4', $6 = '2948', $7 = '515840248000537300', $8 = '0', $9 = '2026-01-07 16:11:56.918', $10 = '2026-01-07 16:11:56.82', $11 = '49533.9', $12 = '49549.39', $13 = '49523.1', $14 = '49543.4', $15 = '2948', $16 = '0', $17 = '2026-01-07 16:11:56.918', $18 = '2026-01-07 16:11:56.82'
-
INSERT INTO T15 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:26:40 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 16:00:00', $2 = '8706.4', $3 = '8709.9', $4 = '8704.9', $5 = '8707.85', $6 = '699', $7 = '515840248015086300', $8 = '0', $9 = '2026-01-07 16:26:40.752', $10 = '2026-01-07 16:26:40.681', $11 = '8706.4', $12 = '8709.9', $13 = '8704.9', $14 = '8707.85', $15 = '699', $16 = '0', $17 = '2026-01-07 16:26:40.752', $18 = '2026-01-07 16:26:40.681'
Rank 13 (bind): total duration 183ms, executed 2,239 times, min 0ms, max 0ms, avg 0ms
Query: INSERT INTO T60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported (Time consuming bind #13):
Hour 16: count 2,239, duration 183ms, avg 0ms
-
INSERT INTO T60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:00:03 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 15:00:00', $2 = '11.7526', $3 = '11.76012', $4 = '11.73922', $5 = '11.73958', $6 = '13322', $7 = '500991628209882200', $8 = '0', $9 = '2026-01-07 16:00:03.434', $10 = '2026-01-07 16:00:03.322', $11 = '11.7526', $12 = '11.76012', $13 = '11.73922', $14 = '11.73958', $15 = '13322', $16 = '0', $17 = '2026-01-07 16:00:03.434', $18 = '2026-01-07 16:00:03.322'
-
INSERT INTO T60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:11:56 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 15:00:00', $2 = '49490.9', $3 = '49549.39', $4 = '49481.6', $5 = '49543.4', $6 = '10917', $7 = '515840248000890300', $8 = '0', $9 = '2026-01-07 16:11:56.95', $10 = '2026-01-07 16:11:56.839', $11 = '49490.9', $12 = '49549.39', $13 = '49481.6', $14 = '49543.4', $15 = '10917', $16 = '0', $17 = '2026-01-07 16:11:56.95', $18 = '2026-01-07 16:11:56.839'
-
INSERT INTO T60 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:01:02 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 15:00:00', $2 = '38.34', $3 = '38.34', $4 = '38.18', $5 = '38.19', $6 = '612', $7 = '515840246003909300', $8 = '0', $9 = '2026-01-07 16:01:02.844', $10 = '2026-01-07 16:01:02.713', $11 = '38.34', $12 = '38.34', $13 = '38.18', $14 = '38.19', $15 = '612', $16 = '0', $17 = '2026-01-07 16:01:02.844', $18 = '2026-01-07 16:01:02.713'
Rank 14 (bind): total duration 180ms, executed 2,432 times, min 0ms, max 1ms, avg 0ms
Query: select category, ;
Times reported (Time consuming bind #14):
Hour 16: count 2,432, duration 180ms, avg 0ms
-
select category, ;
Date: 2026-01-07 16:31:02 Duration: 1ms Database: postgres parameters: $1 = '515852059324638307', $2 = 'symbol', $3 = 'NGAS', $4 = 'NIKKEI', $5 = 'XAUUSD', $6 = 'CL', $7 = 'DOW', $8 = 'SP', $9 = 'NSDQ', $10 = 'ASX', $11 = 'FTSE', $12 = 'DAX', $13 = 'XAGUSD', $14 = 'XPTUSD', $15 = 'XPDUSD', $16 = 'CAC', $17 = 'NGAS', $18 = 'CL', $19 = 'ASX', $20 = 'XAGUSD', $21 = 'XPDUSD', $22 = 'SP', $23 = 'NIKKEI', $24 = 'FTSE', $25 = 'XAUUSD', $26 = 'XPTUSD', $27 = 'NSDQ', $28 = 'DOW', $29 = 'DAX', $30 = 'CAC', $31 = '515852059324638307', $32 = 'symbol', $33 = 'NGAS', $34 = 'NIKKEI', $35 = 'XAUUSD', $36 = 'CL', $37 = 'DOW', $38 = 'SP', $39 = 'NSDQ', $40 = 'ASX', $41 = 'FTSE', $42 = 'DAX', $43 = 'XAGUSD', $44 = 'XPTUSD', $45 = 'XPDUSD', $46 = 'CAC', $47 = 'NGAS', $48 = 'CL', $49 = 'ASX', $50 = 'XAGUSD', $51 = 'XPDUSD', $52 = 'SP', $53 = 'NIKKEI', $54 = 'FTSE', $55 = 'XAUUSD', $56 = 'XPTUSD', $57 = 'NSDQ', $58 = 'DOW', $59 = 'DAX', $60 = 'CAC'
-
select category, ;
Date: 2026-01-07 16:41:55 Duration: 1ms Database: postgres parameters: $1 = '605717914809373307', $2 = 'symbol', $3 = 'AUDJPY', $4 = 'GBPJPY', $5 = 'NZDJPY', $6 = 'CADJPY', $7 = 'CHFJPY', $8 = 'EURJPY', $9 = 'GBPAUD', $10 = 'GBPNZD', $11 = 'EURAUD', $12 = 'EURNZD', $13 = 'GBPCAD', $14 = 'EURCAD', $15 = 'CADCHF', $16 = 'EURGBP', $17 = 'EURCHF', $18 = 'GBPCHF', $19 = 'EURNZD', $20 = 'CADJPY', $21 = 'GBPJPY', $22 = 'AUDCHF', $23 = 'GBPCAD', $24 = 'NZDCAD', $25 = 'AUDJPY', $26 = 'CHFJPY', $27 = 'NZDUSD', $28 = 'EURCAD', $29 = 'NZDCHF', $30 = 'USDSGD', $31 = 'GBPAUD', $32 = 'USDSGD', $33 = 'AUDNZD', $34 = 'GBPNZD', $35 = 'AUDCAD', $36 = 'EURAUD', $37 = 'NZDCAD', $38 = 'NZDJPY', $39 = 'NZDUSD', $40 = 'AUDCAD', $41 = 'GBPCHF', $42 = 'EURJPY', $43 = 'AUDCHF', $44 = 'EURCHF', $45 = 'AUDNZD', $46 = 'CADCHF', $47 = 'NZDCHF', $48 = 'EURGBP', $49 = '605717914809373307', $50 = 'symbol', $51 = 'AUDJPY', $52 = 'GBPJPY', $53 = 'NZDJPY', $54 = 'CADJPY', $55 = 'CHFJPY', $56 = 'EURJPY', $57 = 'GBPAUD', $58 = 'GBPNZD', $59 = 'EURAUD', $60 = 'EURNZD', $61 = 'GBPCAD', $62 = 'EURCAD', $63 = 'CADCHF', $64 = 'EURGBP', $65 = 'EURCHF', $66 = 'GBPCHF', $67 = 'EURNZD', $68 = 'CADJPY', $69 = 'GBPJPY', $70 = 'AUDCHF', $71 = 'GBPCAD', $72 = 'NZDCAD', $73 = 'AUDJPY', $74 = 'CHFJPY', $75 = 'NZDUSD', $76 = 'EURCAD', $77 = 'NZDCHF', $78 = 'USDSGD', $79 = 'GBPAUD', $80 = 'USDSGD', $81 = 'AUDNZD', $82 = 'GBPNZD', $83 = 'AUDCAD', $84 = 'EURAUD', $85 = 'NZDCAD', $86 = 'NZDJPY', $87 = 'NZDUSD', $88 = 'AUDCAD', $89 = 'GBPCHF', $90 = 'EURJPY', $91 = 'AUDCHF', $92 = 'EURCHF', $93 = 'AUDNZD', $94 = 'CADCHF', $95 = 'NZDCHF', $96 = 'EURGBP'
-
select category, ;
Date: 2026-01-07 16:31:02 Duration: 1ms Database: postgres parameters: $1 = '515852059324736307', $2 = 'symbol', $3 = 'USDMXN', $4 = 'CHFZAR', $5 = 'AUDJPY', $6 = 'CADJPY', $7 = 'USDZAR', $8 = 'ZARJPY', $9 = 'CHFJPY', $10 = 'NZDJPY', $11 = 'USDJPY', $12 = 'TRYJPY', $13 = 'AUDZAR', $14 = 'USDHUF', $15 = 'GBPZAR', $16 = 'EURMXN', $17 = 'GBPJPY', $18 = 'EURNOK', $19 = 'EURCNH', $20 = 'USDCZK', $21 = 'CHFHUF', $22 = 'SGDJPY', $23 = 'EURHKD', $24 = 'EURJPY', $25 = 'USDNOK', $26 = 'EURZAR', $27 = 'EURSEK', $28 = 'USDDKK', $29 = 'NZDSEK', $30 = 'USDSEK', $31 = 'EURTRY', $32 = 'EURHUF', $33 = 'USDPLN', $34 = 'EURCZK', $35 = 'USDCNH', $36 = 'GBPNZD', $37 = 'EURPLN', $38 = 'USDILS', $39 = 'EURNZD', $40 = 'EURCZK', $41 = 'TRYJPY', $42 = 'EURGBP', $43 = 'GBPAUD', $44 = 'CHFHUF', $45 = 'EURHUF', $46 = 'EURAUD', $47 = 'GBPCAD', $48 = 'ZARJPY', $49 = 'USDZAR', $50 = 'EURCAD', $51 = 'GBPCAD', $52 = 'USDTRY', $53 = '515852059324736307', $54 = 'symbol', $55 = 'USDMXN', $56 = 'CHFZAR', $57 = 'AUDJPY', $58 = 'CADJPY', $59 = 'USDZAR', $60 = 'ZARJPY', $61 = 'CHFJPY', $62 = 'NZDJPY', $63 = 'USDJPY', $64 = 'TRYJPY', $65 = 'AUDZAR', $66 = 'USDHUF', $67 = 'GBPZAR', $68 = 'EURMXN', $69 = 'GBPJPY', $70 = 'EURNOK', $71 = 'EURCNH', $72 = 'USDCZK', $73 = 'CHFHUF', $74 = 'SGDJPY', $75 = 'EURHKD', $76 = 'EURJPY', $77 = 'USDNOK', $78 = 'EURZAR', $79 = 'EURSEK', $80 = 'USDDKK', $81 = 'NZDSEK', $82 = 'USDSEK', $83 = 'EURTRY', $84 = 'EURHUF', $85 = 'USDPLN', $86 = 'EURCZK', $87 = 'USDCNH', $88 = 'GBPNZD', $89 = 'EURPLN', $90 = 'USDILS', $91 = 'EURNZD', $92 = 'EURCZK', $93 = 'TRYJPY', $94 = 'EURGBP', $95 = 'GBPAUD', $96 = 'CHFHUF', $97 = 'EURHUF', $98 = 'EURAUD', $99 = 'GBPCAD', $100 = 'ZARJPY', $101 = 'USDZAR', $102 = 'EURCAD', $103 = 'GBPCAD', $104 = 'USDTRY'
#15  Total duration: 107ms  Count: 1,080  Min: 0ms  Max: 0ms  Avg: 0ms
INSERT INTO T240 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Times reported (Time consuming bind #15)
Day Hour Count Duration Avg duration
Jan 07 16 1,080 107ms 0ms
-
INSERT INTO T240 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:01:58 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 12:00:00', $2 = '452.29', $3 = '454.06', $4 = '450.44', $5 = '450.49', $6 = '1408', $7 = '515840233328993300', $8 = '0', $9 = '2026-01-07 16:01:58.553', $10 = '2026-01-07 16:01:58.552', $11 = '452.29', $12 = '454.06', $13 = '450.44', $14 = '450.49', $15 = '1408', $16 = '0', $17 = '2026-01-07 16:01:58.553', $18 = '2026-01-07 16:01:58.552'
-
INSERT INTO T240 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:01:14 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 12:00:00', $2 = '25751.75', $3 = '25793.75', $4 = '25728.25', $5 = '25788.75', $6 = '22642', $7 = '515840230516312300', $8 = '0', $9 = '2026-01-07 16:01:14.722', $10 = '2026-01-07 16:01:14.722', $11 = '25751.75', $12 = '25793.75', $13 = '25728.25', $14 = '25788.75', $15 = '22642', $16 = '0', $17 = '2026-01-07 16:01:14.723', $18 = '2026-01-07 16:01:14.722'
-
INSERT INTO T240 (pricedatetime, open, high, low, close, volume, symbolid, bsf, sastdatetimewritten, sastdatetimereceived) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ON CONFLICT (pricedatetime, symbolid) DO UPDATE SET open = $11, high = $12, low = $13, close = $14, volume = $15, bsf = $16, sastdatetimewritten = $17, sastdatetimereceived = $18;
Date: 2026-01-07 16:00:58 Duration: 0ms Database: postgres parameters: $1 = '2026-01-07 12:00:00', $2 = '1.28145', $3 = '1.28146', $4 = '1.28034', $5 = '1.28051', $6 = '10639', $7 = '515840243942278300', $8 = '0', $9 = '2026-01-07 16:00:58.082', $10 = '2026-01-07 16:00:58.081', $11 = '1.28145', $12 = '1.28146', $13 = '1.28034', $14 = '1.28051', $15 = '10639', $16 = '0', $17 = '2026-01-07 16:00:58.082', $18 = '2026-01-07 16:00:58.081'
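The upsert above binds every value twice: once for the INSERT ($1..$10) and again for the DO UPDATE SET clause ($11..$18). PostgreSQL's EXCLUDED pseudo-relation refers to the row proposed for insertion, so the same statement can be written with only the first ten parameters. A minimal equivalent sketch (same hypothetical T240 table as in the log):

```sql
-- Equivalent upsert using the EXCLUDED pseudo-row: each value is
-- bound once ($1..$10) instead of twice, shrinking the bind message.
INSERT INTO T240 (pricedatetime, open, high, low, close, volume,
                  symbolid, bsf, sastdatetimewritten, sastdatetimereceived)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
ON CONFLICT (pricedatetime, symbolid) DO UPDATE
SET open                 = EXCLUDED.open,
    high                 = EXCLUDED.high,
    low                  = EXCLUDED.low,
    close                = EXCLUDED.close,
    volume               = EXCLUDED.volume,
    bsf                  = EXCLUDED.bsf,
    sastdatetimewritten  = EXCLUDED.sastdatetimewritten,
    sastdatetimereceived = EXCLUDED.sastdatetimereceived;
```

With 1,080 executions in the hour, halving the parameter count mainly trims bind-time overhead and log volume; the execute cost is unchanged.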
#16  Total duration: 96ms  Count: 248  Min: 0ms  Max: 0ms  Avg: 0ms
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME, CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema' WHEN true THEN CASE WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TABLE' WHEN 'v' THEN 'SYSTEM VIEW' WHEN 'i' THEN 'SYSTEM INDEX' ELSE NULL END WHEN n.nspname = 'pg_toast' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TOAST TABLE' WHEN 'i' THEN 'SYSTEM TOAST INDEX' ELSE NULL END ELSE CASE c.relkind WHEN 'r' THEN 'TEMPORARY TABLE' WHEN 'p' THEN 'TEMPORARY TABLE' WHEN 'i' THEN 'TEMPORARY INDEX' WHEN 'S' THEN 'TEMPORARY SEQUENCE' WHEN 'v' THEN 'TEMPORARY VIEW' ELSE NULL END END WHEN false THEN CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'p' THEN 'PARTITIONED TABLE' WHEN 'i' THEN 'INDEX' WHEN 'S' THEN 'SEQUENCE' WHEN 'v' THEN 'VIEW' WHEN 'c' THEN 'TYPE' WHEN 'f' THEN 'FOREIGN TABLE' WHEN 'm' THEN 'MATERIALIZED VIEW' ELSE NULL END ELSE NULL END AS TABLE_TYPE, d.description AS REMARKS, '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0) LEFT JOIN pg_catalog.pg_class dc ON (d.classoid = dc.oid AND dc.relname = 'pg_class') LEFT JOIN pg_catalog.pg_namespace dn ON (dn.oid = dc.relnamespace AND dn.nspname = 'pg_catalog') WHERE c.relnamespace = n.oid AND c.relname LIKE 'PROBABLYNOT' AND (false OR (c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema')) ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME;
Times reported (Time consuming bind #16)
Day Hour Count Duration Avg duration
Jan 07 16 248 96ms 0ms
-
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME, CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema' WHEN true THEN CASE WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TABLE' WHEN 'v' THEN 'SYSTEM VIEW' WHEN 'i' THEN 'SYSTEM INDEX' ELSE NULL END WHEN n.nspname = 'pg_toast' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TOAST TABLE' WHEN 'i' THEN 'SYSTEM TOAST INDEX' ELSE NULL END ELSE CASE c.relkind WHEN 'r' THEN 'TEMPORARY TABLE' WHEN 'p' THEN 'TEMPORARY TABLE' WHEN 'i' THEN 'TEMPORARY INDEX' WHEN 'S' THEN 'TEMPORARY SEQUENCE' WHEN 'v' THEN 'TEMPORARY VIEW' ELSE NULL END END WHEN false THEN CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'p' THEN 'PARTITIONED TABLE' WHEN 'i' THEN 'INDEX' WHEN 'S' THEN 'SEQUENCE' WHEN 'v' THEN 'VIEW' WHEN 'c' THEN 'TYPE' WHEN 'f' THEN 'FOREIGN TABLE' WHEN 'm' THEN 'MATERIALIZED VIEW' ELSE NULL END ELSE NULL END AS TABLE_TYPE, d.description AS REMARKS, '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0) LEFT JOIN pg_catalog.pg_class dc ON (d.classoid = dc.oid AND dc.relname = 'pg_class') LEFT JOIN pg_catalog.pg_namespace dn ON (dn.oid = dc.relnamespace AND dn.nspname = 'pg_catalog') WHERE c.relnamespace = n.oid AND c.relname LIKE 'PROBABLYNOT' AND (false OR (c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema')) ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME;
Date: 2026-01-07 16:13:26 Duration: 0ms Database: postgres
-
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME, CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema' WHEN true THEN CASE WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TABLE' WHEN 'v' THEN 'SYSTEM VIEW' WHEN 'i' THEN 'SYSTEM INDEX' ELSE NULL END WHEN n.nspname = 'pg_toast' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TOAST TABLE' WHEN 'i' THEN 'SYSTEM TOAST INDEX' ELSE NULL END ELSE CASE c.relkind WHEN 'r' THEN 'TEMPORARY TABLE' WHEN 'p' THEN 'TEMPORARY TABLE' WHEN 'i' THEN 'TEMPORARY INDEX' WHEN 'S' THEN 'TEMPORARY SEQUENCE' WHEN 'v' THEN 'TEMPORARY VIEW' ELSE NULL END END WHEN false THEN CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'p' THEN 'PARTITIONED TABLE' WHEN 'i' THEN 'INDEX' WHEN 'S' THEN 'SEQUENCE' WHEN 'v' THEN 'VIEW' WHEN 'c' THEN 'TYPE' WHEN 'f' THEN 'FOREIGN TABLE' WHEN 'm' THEN 'MATERIALIZED VIEW' ELSE NULL END ELSE NULL END AS TABLE_TYPE, d.description AS REMARKS, '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0) LEFT JOIN pg_catalog.pg_class dc ON (d.classoid = dc.oid AND dc.relname = 'pg_class') LEFT JOIN pg_catalog.pg_namespace dn ON (dn.oid = dc.relnamespace AND dn.nspname = 'pg_catalog') WHERE c.relnamespace = n.oid AND c.relname LIKE 'PROBABLYNOT' AND (false OR (c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema')) ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME;
Date: 2026-01-07 16:13:26 Duration: 0ms Database: postgres
-
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, c.relname AS TABLE_NAME, CASE n.nspname ~ '^pg_' OR n.nspname = 'information_schema' WHEN true THEN CASE WHEN n.nspname = 'pg_catalog' OR n.nspname = 'information_schema' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TABLE' WHEN 'v' THEN 'SYSTEM VIEW' WHEN 'i' THEN 'SYSTEM INDEX' ELSE NULL END WHEN n.nspname = 'pg_toast' THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TOAST TABLE' WHEN 'i' THEN 'SYSTEM TOAST INDEX' ELSE NULL END ELSE CASE c.relkind WHEN 'r' THEN 'TEMPORARY TABLE' WHEN 'p' THEN 'TEMPORARY TABLE' WHEN 'i' THEN 'TEMPORARY INDEX' WHEN 'S' THEN 'TEMPORARY SEQUENCE' WHEN 'v' THEN 'TEMPORARY VIEW' ELSE NULL END END WHEN false THEN CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'p' THEN 'PARTITIONED TABLE' WHEN 'i' THEN 'INDEX' WHEN 'S' THEN 'SEQUENCE' WHEN 'v' THEN 'VIEW' WHEN 'c' THEN 'TYPE' WHEN 'f' THEN 'FOREIGN TABLE' WHEN 'm' THEN 'MATERIALIZED VIEW' ELSE NULL END ELSE NULL END AS TABLE_TYPE, d.description AS REMARKS, '' as TYPE_CAT, '' as TYPE_SCHEM, '' as TYPE_NAME, '' AS SELF_REFERENCING_COL_NAME, '' AS REF_GENERATION FROM pg_catalog.pg_namespace n, pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_description d ON (c.oid = d.objoid AND d.objsubid = 0) LEFT JOIN pg_catalog.pg_class dc ON (d.classoid = dc.oid AND dc.relname = 'pg_class') LEFT JOIN pg_catalog.pg_namespace dn ON (dn.oid = dc.relnamespace AND dn.nspname = 'pg_catalog') WHERE c.relnamespace = n.oid AND c.relname LIKE 'PROBABLYNOT' AND (false OR (c.relkind = 'r' AND n.nspname !~ '^pg_' AND n.nspname <> 'information_schema')) ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME;
Date: 2026-01-07 16:13:26 Duration: 0ms Database: postgres
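The catalog query above has the shape the PostgreSQL JDBC driver generates for `DatabaseMetaData.getTables()`; the pattern 'PROBABLYNOT' suggests a client-side tool probing for a table it expects not to exist. For a hand-written check, the same information can be fetched far more simply. A minimal sketch (the name pattern is the one from the log; substitute your own):

```sql
-- Leaner equivalent of the driver-generated lookup: list ordinary
-- user tables whose name matches a pattern, skipping system schemas.
SELECT n.nspname AS table_schem,
       c.relname AS table_name
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname !~ '^pg_'
  AND n.nspname <> 'information_schema'
  AND c.relname LIKE 'PROBABLYNOT'
ORDER BY table_schem, table_name;
```

At 248 executions in the hour, a query this frequent is worth caching on the client side rather than re-issuing per connection.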
#17  Total duration: 89ms  Count: 128  Min: 0ms  Max: 0ms  Avg: 0ms
SELECT absolutetimezoneoffset;
Times reported (Time consuming bind #17)
Day Hour Count Duration Avg duration
Jan 07 16 128 89ms 0ms
-
SELECT absolutetimezoneoffset;
Date: 2026-01-07 16:41:55 Duration: 0ms Database: postgres parameters: $1 = '972', $2 = 'Forex Majors'
-
SELECT absolutetimezoneoffset;
Date: 2026-01-07 16:41:55 Duration: 0ms Database: postgres parameters: $1 = '972', $2 = 'Indices'
-
SELECT absolutetimezoneoffset;
Date: 2026-01-07 16:31:28 Duration: 0ms Database: postgres parameters: $1 = '914', $2 = 'Crypto'
#18  Total duration: 80ms  Count: 94  Min: 0ms  Max: 1ms  Avg: 0ms
SELECT timegranularity FROM brokersymbollist bsl INNER JOIN symbols s ON bsl.symbolid = s.symbolid INNER JOIN downloadersymbolsettings dss on s.symbolid = dss.symbolid LEFT OUTER JOIN brokerinstrumentmapping bdfi ON bdfi.brokerid = $1 AND dss.datafeedinstrumentid = bdfi.datafeedinstrumentid WHERE s.nonliquid = 0 and s.deleted = 0 and dss.enabled = 1 AND s.symbol ILIKE $2 AND bsl.brokerid = $3 AND timegranularity >= 15 ORDER BY timegranularity LIMIT 1;
Times reported (Time consuming bind #18)
Day Hour Count Duration Avg duration
Jan 07 16 94 80ms 0ms
-
SELECT timegranularity FROM brokersymbollist bsl INNER JOIN symbols s ON bsl.symbolid = s.symbolid INNER JOIN downloadersymbolsettings dss on s.symbolid = dss.symbolid LEFT OUTER JOIN brokerinstrumentmapping bdfi ON bdfi.brokerid = $1 AND dss.datafeedinstrumentid = bdfi.datafeedinstrumentid WHERE s.nonliquid = 0 and s.deleted = 0 and dss.enabled = 1 AND s.symbol ILIKE $2 AND bsl.brokerid = $3 AND timegranularity >= 15 ORDER BY timegranularity LIMIT 1;
Date: 2026-01-07 16:31:55 Duration: 1ms Database: postgres parameters: $1 = '621', $2 = 'EURJPY', $3 = '621'
-
SELECT timegranularity FROM brokersymbollist bsl INNER JOIN symbols s ON bsl.symbolid = s.symbolid INNER JOIN downloadersymbolsettings dss on s.symbolid = dss.symbolid LEFT OUTER JOIN brokerinstrumentmapping bdfi ON bdfi.brokerid = $1 AND dss.datafeedinstrumentid = bdfi.datafeedinstrumentid WHERE s.nonliquid = 0 and s.deleted = 0 and dss.enabled = 1 AND s.symbol ILIKE $2 AND bsl.brokerid = $3 AND timegranularity >= 15 ORDER BY timegranularity LIMIT 1;
Date: 2026-01-07 16:06:11 Duration: 1ms Database: postgres parameters: $1 = '558', $2 = 'GBPUSD', $3 = '558'
-
SELECT timegranularity FROM brokersymbollist bsl INNER JOIN symbols s ON bsl.symbolid = s.symbolid INNER JOIN downloadersymbolsettings dss on s.symbolid = dss.symbolid LEFT OUTER JOIN brokerinstrumentmapping bdfi ON bdfi.brokerid = $1 AND dss.datafeedinstrumentid = bdfi.datafeedinstrumentid WHERE s.nonliquid = 0 and s.deleted = 0 and dss.enabled = 1 AND s.symbol ILIKE $2 AND bsl.brokerid = $3 AND timegranularity >= 15 ORDER BY timegranularity LIMIT 1;
Date: 2026-01-07 16:21:28 Duration: 1ms Database: postgres parameters: $1 = '558', $2 = 'US500', $3 = '558'
#19  Total duration: 70ms  Count: 15  Min: 3ms  Max: 6ms  Avg: 4ms
SELECT DISTINCT ON (basegroupname, symbol) ;
Times reported (Time consuming bind #19)
Day Hour Count Duration Avg duration
Jan 07 16 15 70ms 4ms
-
SELECT DISTINCT ON (basegroupname, symbol) ;
Date: 2026-01-07 16:12:57 Duration: 6ms Database: postgres parameters: $1 = '489', $2 = '489'
-
SELECT DISTINCT ON (basegroupname, symbol) ;
Date: 2026-01-07 16:11:26 Duration: 5ms Database: postgres parameters: $1 = '689', $2 = '689'
-
SELECT DISTINCT ON (basegroupname, symbol) ;
Date: 2026-01-07 16:06:02 Duration: 5ms Database: postgres parameters: $1 = '627', $2 = '627'
#20  Total duration: 49ms  Count: 1  Min: 49ms  Max: 49ms  Avg: 49ms
with maxwhid as ( ;
Times reported (Time consuming bind #20)
Day Hour Count Duration Avg duration
Jan 07 16 1 49ms 49ms
-
with maxwhid as ( ;
Date: 2026-01-07 16:11:46 Duration: 49ms Database: postgres parameters: $1 = '335', $2 = '621', $3 = '637', $4 = '642', $5 = '666', $6 = '660', $7 = '643', $8 = '630', $9 = '680', $10 = '641', $11 = '431', $12 = '622', $13 = '489', $14 = '529', $15 = '576', $16 = '665', $17 = '667', $18 = '558', $19 = '620', $20 = '125', $21 = '488', $22 = '567', $23 = '689', $24 = '700', $25 = '758', $26 = '763', $27 = '765', $28 = '817', $29 = '914', $30 = '972'
-
Events
Log levels
Key values
- 961,430 Log entries
Events distribution
Key values
- 0 PANIC entries
- 0 FATAL entries
- 0 ERROR entries
- 1 WARNING entry
Most Frequent Errors/Events
Key values
- 1 Max number of times the same event was reported
- 1 Total events found
Rank Times reported Error
1 1 WARNING: is not a PostgreSQL server process
Times reported (Most Frequent Error / Event #1)
Day Hour Count
Jan 07 16 1
-
WARNING: PID 28016 is not a PostgreSQL server process
Date: 2026-01-07 16:33:13
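This warning text is what PostgreSQL's signalling functions emit when handed an OS PID that does not belong to a backend; the log alone does not show which call produced it, but `pg_terminate_backend()` or `pg_cancel_backend()` invoked with a stale PID is the usual source. A minimal sketch reproducing and avoiding it (the PID is the one from the log):

```sql
-- Signalling a PID that is not a backend logs the warning seen above
-- and returns false instead of raising an error:
SELECT pg_terminate_backend(28016);
-- WARNING:  PID 28016 is not a PostgreSQL server process

-- To target only live backends, take PIDs from pg_stat_activity,
-- excluding the current session:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND pid <> pg_backend_pid();
```

Pulling PIDs from `pg_stat_activity` in the same statement keeps the window for a PID to be reused by a non-PostgreSQL process as small as possible.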