<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>siebel monitoring</title>
<description>Let's see how it goes.
</description>
<link>http://siebelmonitoring.com/</link>
<atom:link href="http://siebelmonitoring.com/feed.xml" rel="self" type="application/rss+xml" />
<pubDate>Thu, 25 Sep 2014 15:56:30 +0200</pubDate>
<lastBuildDate>Thu, 25 Sep 2014 15:56:30 +0200</lastBuildDate>
<generator>Jekyll v2.3.0</generator>
<item>
<title>Find badly performing SQLs in Siebel</title>
<description><p>Situation: your business customers are complaining about poor Siebel performance.
“Search is slow, the landing page is slow, a particular list applet opens slowly, etc.”</p>
<p>There are many things to check when troubleshooting issues like this, but the #1 candidate
is your application’s database layer. As you know, Siebel generates
SQL statements on the fly, so there is no fixed set of predefined
statements which you could check and tune in advance.
Modern database engines do a lot of clever things to ensure that all your SQL
statements perform well. But nothing is perfect, and sometimes you
need to get your hands dirty to bring the application’s performance back on track.</p>
<p>So, how can we check Siebel SQL statement performance metrics?
One way is to do it from the database side. Both Oracle and MSSQL provide
functionality to record execution plans and performance metrics.
This is a powerful and effective way to attack the issue, especially
if there is a good DBA to assist you, or perhaps you are the DBA yourself.</p>
<p>If you are not the DBA and there is no DBA around to help you,
you can still find the Siebel-generated SQL statements and their basic execution metrics.
Yes, I’m talking about Siebel logs.
To proceed with this approach you need access to manipulate
event log levels and access to the log directory on the Siebel server.
There is enough information around on how to enable SQL logging;
start with Oracle’s <a href="http://goo.gl/eIas5L">About Using SQL Tagging to Trace Long-Running Queries in Siebel Business Applications</a></p>
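<p>As a quick reminder (the component name below is illustrative; check the Oracle document above for the details of your version), SQL tracing is typically switched on per component via srvrmgr by raising the ObjMgrSqlLog event level, and switched back once the logs are collected:</p>

```
srvrmgr> change evtloglvl ObjMgrSqlLog=4 for comp SCCObjMgr_enu
...reproduce the slow scenario, collect the logs...
srvrmgr> change evtloglvl ObjMgrSqlLog=1 for comp SCCObjMgr_enu
```

<p>Remember to lower the level back afterwards: level 4 logging is verbose and eats disk space quickly.</p>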
<p>You are really lucky if there is a proper testing environment where
the performance problems can be reproduced in a controlled manner.
That means you get just a handful of log files, which
can be parsed manually to pinpoint the bad SQL easily.</p>
<p>But often problems are environment-specific and not reproducible
on lower environments (due to different data sets, database
settings, or the absence of means to reproduce specific load situations).
In that case there is nothing left but to collect logs
directly on the problematic environment. That activity can end with
hundreds of gigabytes of log data even on mid-size installations.</p>
<p>And finally we are here: millions of lines of log data and no time at all.
You need some automation to parse the logs and find those bad SQLs.</p>
<p>The provided <a href="https://gist.github.com/corvax19/4d8b9a9661e1dcd775d3">script</a> is written in AWK,
which should be available on virtually any UNIX/Linux box.
Run “SQLextract help” to read the built-in help.
Let me explain the main usage scenarios.</p>
<p>Quick-scan the logs and find all SQL statements:</p>
<pre><code>mcp: corvax$ ./SQLextract yourObjMgr_*
{"statement":"SELECT","SQLID"="1129D660","md5":"738118fffb6904149436c95dcba502f0","ts":"2014-08-25 00:55:07","sec":"0.001","file":"yourObjMgr_enu_0744_780141766.log","fromLine":5190,"toLine":5213}
{"statement":"SELECT","SQLID"="112EC500","md5":"738118fffb6904149436c95dcba502f0","ts":"2014-08-25 00:55:07","sec":"0.001","file":"yourObjMgr_enu_0744_780141766.log","fromLine":5230,"toLine":5253}
{"statement":"SELECT","SQLID"="11263900","md5":"8a7f096ac44ddae8ee00adf677af7838","ts":"2014-08-25 00:55:09","sec":"0.147","file":"yourObjMgr_enu_0744_780141766.log","fromLine":5420,"toLine":5718}
...
</code></pre>
<p>As you can see, the tool reports in JSON.</p>
<p>Field meanings:</p>
<ul>
<li>statement - SELECT, INSERT or UPDATE;</li>
<li>SQLID - ID of the statement as it is referenced in the log file;</li>
<li>md5 - md5 hash of the statement, so you can recognize repeats while aggregating;</li>
<li>ts - timestamp;</li>
<li>sec - execution time of the statement;</li>
<li>file - source log file where that particular statement was found;</li>
<li>fromLine, toLine - location of the statement in the log file.</li>
</ul>
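<p>Since every record is a single line, plain text tools go a long way. As a sketch (assuming the exact field layout shown above, with md5 and sec at fixed double-quote-delimited positions; the sample lines and md5 values are shortened stand-ins), here is one way to aggregate the scan output per unique statement:</p>

```shell
# Hypothetical sample of the scan output (md5 values shortened for brevity).
# With the double quote as field separator, md5 is field 12 and sec is field 20
# in the layout shown above.
agg=$(printf '%s\n' \
  '{"statement":"SELECT","SQLID"="1129D660","md5":"738118ff","ts":"2014-08-25 00:55:07","sec":"0.001","file":"a.log","fromLine":1,"toLine":24}' \
  '{"statement":"SELECT","SQLID"="112EC500","md5":"738118ff","ts":"2014-08-25 00:55:07","sec":"0.002","file":"a.log","fromLine":30,"toLine":53}' \
  '{"statement":"SELECT","SQLID"="11263900","md5":"8a7f096a","ts":"2014-08-25 00:55:09","sec":"0.147","file":"a.log","fromLine":60,"toLine":90}' |
  awk -F'"' '{ cnt[$12]++; sum[$12] += $20 }
       END   { for (m in sum) printf "%s total=%.3f count=%d\n", m, sum[m], cnt[m] }' |
  sort -t= -k2 -rn)
echo "$agg"
```

<p>The same idea scales to the full logs: pipe the scan output straight into the aggregation instead of saving it first.</p>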
<p>Quick-scan the logs and find all SQL statements with execution time longer than the specified threshold (in seconds):</p>
<pre><code>mcp: corvax$ ./SQLextract -v exectime=10 yourObjMgr_*
{"statement":"SELECT","SQLID"="1106B190","md5":"73c4bfeedfea4d4a21380aac57897b35","ts":"2014-08-25 06:13:43","sec":"238.040","file":"yourObjMgr_enu_0744_780141955.log","fromLine":2727,"toLine":3101}
{"statement":"SELECT","SQLID"="20CC27A0","md5":"78083ca2372c10d9f568af09956edd57","ts":"2014-08-25 08:38:23","sec":"32.971","file":"yourObjMgr_enu_0744_780141994.log","fromLine":66777,"toLine":66981}
{"statement":"SELECT","SQLID"="1D596960","md5":"8a7f096ac44ddae8ee00adf677af7838","ts":"2014-08-25 09:04:08","sec":"13.882","file":"yourObjMgr_enu_0744_780141994.log","fromLine":112240,"toLine":112538}
...
</code></pre>
<p>The previous two examples are scans only.
The script can also extract the detected statements into separate files, providing
the same metadata (md5, sec, source file etc.) in the filenames.
Here is how it looks.</p>
<p>Initial state:</p>
<pre><code>mcp:slow corvax$ ll
total 76440
drwxr-xr-x 7 corvax staff 238 Sep 25 12:37 .
drwxr-xr-x 5 corvax staff 170 Sep 25 12:36 ..
-rwxr-xr-x 1 corvax staff 4336 Sep 25 12:35 SQLextract
-rw------- 1 corvax staff 21596537 Sep 25 12:34 yourObjMgr_enu_0744_780141955.log
-rw------- 1 corvax staff 14545580 Sep 25 12:34 yourObjMgr_enu_0744_780141994.log
-rw------- 1 corvax staff 534020 Sep 25 12:34 yourObjMgr_enu_0744_780142000.log
-rw------- 1 corvax staff 2441813 Sep 25 12:34 yourObjMgr_enu_0744_780142036.log
</code></pre>
<p>Now let’s detect all SQLs with execution time above 10 seconds and extract them into the same location:</p>
<pre><code>mcp:slow corvax$ ./SQLextract -v exectime=10 dump=. *.log
{"statement":"SELECT","SQLID"="1106B190","md5":"73c4bfeedfea4d4a21380aac57897b35","ts":"2014-08-25 06:13:43","sec":"238.040","file":"yourObjMgr_enu_0744_780141955.log","fromLine":2727,"toLine":3101}
{"statement":"SELECT","SQLID"="20CC27A0","md5":"78083ca2372c10d9f568af09956edd57","ts":"2014-08-25 08:38:23","sec":"32.971","file":"yourObjMgr_enu_0744_780141994.log","fromLine":66777,"toLine":66981}
{"statement":"SELECT","SQLID"="1D596960","md5":"8a7f096ac44ddae8ee00adf677af7838","ts":"2014-08-25 09:04:08","sec":"13.882","file":"yourObjMgr_enu_0744_780141994.log","fromLine":112240,"toLine":112538}
{"statement":"SELECT","SQLID"="20B804C0","md5":"1bbcd97145326211188a7b84ad40c7d0","ts":"2014-08-25 10:28:33","sec":"21.660","file":"yourObjMgr_enu_0744_780141994.log","fromLine":292337,"toLine":292636}
...
</code></pre>
<p>When it’s done, you have the following:</p>
<pre><code>mcp:slow corvax$ ll
total 76600
drwxr-xr-x 14 corvax staff 476 Sep 25 12:38 .
drwxr-xr-x 5 corvax staff 170 Sep 25 12:36 ..
-rw-r--r-- 1 corvax staff 11753 Sep 25 12:38 _yourObjMgr_enu_0744_780141955.log_73c4bfeedfea4d4a21380aac57897b35_lines_2727-3101_id1106B190_238.040_sec.sqlperf
-rw-r--r-- 1 corvax staff 8429 Sep 25 12:38 _yourObjMgr_enu_0744_780141994.log_1bbcd97145326211188a7b84ad40c7d0_lines_292337-292636_id20B804C0_21.660_sec.sqlperf
-rw-r--r-- 1 corvax staff 8424 Sep 25 12:38 _yourObjMgr_enu_0744_780141994.log_1bbcd97145326211188a7b84ad40c7d0_lines_484457-484756_id20B9BCE0_33.630_sec.sqlperf
-rw-r--r-- 1 corvax staff 5590 Sep 25 12:38 _yourObjMgr_enu_0744_780141994.log_78083ca2372c10d9f568af09956edd57_lines_66777-66981_id20CC27A0_32.971_sec.sqlperf
...
-rwxr-xr-x 1 corvax staff 4336 Sep 25 12:35 sqlextract.awk
-rw------- 1 corvax staff 21596537 Sep 25 12:34 yourObjMgr_enu_0744_780141955.log
-rw------- 1 corvax staff 14545580 Sep 25 12:34 yourObjMgr_enu_0744_780141994.log
-rw------- 1 corvax staff 534020 Sep 25 12:34 yourObjMgr_enu_0744_780142000.log
-rw------- 1 corvax staff 2441813 Sep 25 12:34 yourObjMgr_enu_0744_780142036.log
</code></pre>
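<p>A handy side effect of the metadata-in-filename scheme is that the extracted files can be ranked without opening them. As a sketch (run against fixed hypothetical names here instead of a real directory listing): the seconds figure is always the next-to-last underscore-separated token, so awk plus sort finds the worst offender:</p>

```shell
# Hypothetical extracted file names in the dump format shown above.
files='_a.log_73c4bfee_lines_2727-3101_id1106B190_238.040_sec.sqlperf
_b.log_1bbcd971_lines_292337-292636_id20B804C0_21.660_sec.sqlperf
_b.log_78083ca2_lines_66777-66981_id20CC27A0_32.971_sec.sqlperf'
# Prefix each name with its embedded execution time, then sort numerically.
slowest=$(printf '%s\n' "$files" |
  awk -F_ '{ printf "%s %s\n", $(NF-1), $0 }' |
  sort -rn | head -n1)
echo "$slowest"
```

<p>In a real session you would feed it "ls *.sqlperf" instead of the fixed variable.</p>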
<p>Even with that nice automation you can end up with hundreds of bad-SQL reports and files.</p>
<p>Which one should you start with?</p>
<p>And here comes the last feature: built-in statistics.
At the end of every script invocation there are three TOP5 reports:</p>
<ul>
<li>most frequent statements;</li>
<li>slowest statements;</li>
<li>biggest time consumers, counted as the accumulated
execution time per unique (by md5) statement.</li>
</ul>
<p>This is how it looks:</p>
<pre><code>TOP5 most frequent statements:
MD5: 1bbcd97145326211188a7b84ad40c7d0 count: 3
MD5: 8a7f096ac44ddae8ee00adf677af7838 count: 2
MD5: 73c4bfeedfea4d4a21380aac57897b35 count: 1
MD5: 78083ca2372c10d9f568af09956edd57 count: 1
MD5: 78083ca2372c10d9f568af09956edd57 count: 0
TOP5 slowest statements:
MD5: 73c4bfeedfea4d4a21380aac57897b35 count: 238.040
MD5: 8a7f096ac44ddae8ee00adf677af7838 count: 202.311
MD5: 1bbcd97145326211188a7b84ad40c7d0 count: 68.314
MD5: 78083ca2372c10d9f568af09956edd57 count: 32.971
MD5: 78083ca2372c10d9f568af09956edd57 count: 0
TOP5 time consuming statements:
MD5: 73c4bfeedfea4d4a21380aac57897b35 count: 238.04
MD5: 8a7f096ac44ddae8ee00adf677af7838 count: 216.193
MD5: 1bbcd97145326211188a7b84ad40c7d0 count: 123.604
MD5: 78083ca2372c10d9f568af09956edd57 count: 32.971
MD5: 78083ca2372c10d9f568af09956edd57 count: 0
</code></pre>
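<p>If you are curious how such a ranking can be built in plain AWK, here is a minimal sketch (not the script's actual code; the sample numbers are taken from the report above) of the "slowest statements" idea: sort all (sec, md5) pairs descending and keep only the first line seen per md5:</p>

```shell
# Hypothetical (sec, md5) pairs, as the scanner would collect them.
pairs='238.040 73c4bfee
13.882 8a7f096a
202.311 8a7f096a
21.660 1bbcd971
33.630 1bbcd971
68.314 1bbcd971'
# Sort numerically descending; seen[] keeps only the first (= slowest) hit per md5.
top=$(printf '%s\n' "$pairs" |
  sort -rn |
  awk '!(seen[$2]++) { print $2, $1 }' |
  head -n5)
echo "$top"
```

<p>The "time consumers" report is the same loop with a sum per md5 instead of a first-seen filter.</p>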
<p>If you have already extracted the statements into separate files,
just find the right one (again, by md5) and start working on it.</p>
<p>Happy performance troubleshooting!</p>
</description>
<pubDate>Thu, 25 Sep 2014 00:00:00 +0200</pubDate>
<link>http://siebelmonitoring.com/2014/09/25/find-badly-performing-sql-in-siebel.html</link>
<guid isPermaLink="true">http://siebelmonitoring.com/2014/09/25/find-badly-performing-sql-in-siebel.html</guid>
<category>siebel</category>
<category>SQL</category>
<category>performance</category>
<category>log</category>
<category>analysis</category>
</item>
<item>
<title>Siebel component's memory usage</title>
<description><p>Sometimes, especially when there are issues with component stability and/or performance,
it is vital to know how much memory is consumed by a particular Siebel component.
Having historical records of memory usage also helps with capacity planning and
component configuration adjustments, and is a quick-and-dirty way to identify
memory leaks.</p>
<p>Depending on your component’s configuration, one or more OS processes will be started.
It is no big deal to find out how much memory a running process takes.
Unfortunately, it is not that easy to understand who is who among those Siebel processes,
as they share binaries and differ only in their arguments, which are pretty cryptic
and provide no means (at least none known to me) to find out which component a process implements.</p>
<p>One possible source of that PID&lt;-&gt;Component info is the log of the Siebel root process.
Here is a sample of that log:</p>
<pre>
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:08 Created multithreaded server process (OS pid = 3763 ) for SRBroker
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:08 Created server process (OS pid = 3764 ) for SCBroker
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:13 Created server process (OS pid = 3765 ) for SvrTaskPersist
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:13 Created multithreaded server process (OS pid = 3773 ) for FSMSrvr
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:13 Created multithreaded server process (OS pid = 3774 ) for SRProc
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:44 Created multithreaded server process (OS pid = 3779 ) for WfProcBatchMgr
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:44 Created multithreaded server process (OS pid = 3907 ) for CommInboundRcvr
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:44 Created multithreaded server process (OS pid = 3915 ) for CommOutboundMgr
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:44 Created server process (OS pid = 3923 ) for eCommunicationsObjMgr_enu
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-08-19 00:47:00 Created server process (OS pid = 10773 ) for EAIObjMgr_enu
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-08-19 00:52:06 Created server process (OS pid = 11744 ) for SrvrSched
</pre>
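<p>Extracting the PID-to-component mapping from such lines is straightforward. A minimal sketch (run here against a fixed two-line sample, assuming the log format above, where the component name is the last field):</p>

```shell
# Fixed sample of ServerLog ProcessCreate lines (as in the log above).
log='ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:08 Created multithreaded server process (OS pid = 3763 ) for SRBroker
ServerLog ProcessCreate 1 0000118453c80ead:0 2014-07-18 00:09:08 Created server process (OS pid = 3764 ) for SCBroker'
map=$(printf '%s\n' "$log" | awk '/ProcessCreate/ {
  if (match($0, /pid[ \t]*=[ \t]*[0-9]+/)) {
    pid = substr($0, RSTART, RLENGTH)
    gsub(/[^0-9]/, "", pid)        # keep only the digits of the pid
    print pid, $NF                 # component name is the last field
  }
}')
echo "$map"
```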
<p><a href="https://gist.github.com/corvax19/f31ecf8a033e09bc1096">Here</a> is the awk script which does exactly that:
it lists your running Siebel processes, correlates them with Siebel components and counts memory usage metrics (RSS and VSZ separately)
for each Siebel component. The delimited output is similar to this:</p>
<pre>
|ts|host|enterprise|component|PROC_COUNT|SZ_BYTES|VSZ_BYTES
1409142857|server1|SIEBENT1|SCBroker|1|6766592|7045120
1409142857|server1|SIEBENT1|eCommunicationsObjMgr_enu|1|334606336|336183296
1409142857|server1|SIEBENT1|FSMSrvr|1|99758080|105218048
1409142857|server1|SIEBENT1|CommOutboundMgr|1|221429760|224821248
1409142857|server1|SIEBENT1|WfProcMgr|5|3914874880|3940581376
1409142857|server1|SIEBENT1|EAIObjMgr_enu|1|951222272|952614912
1409142857|server1|SIEBENT1|SRProc|1|124665856|126287872
1409142857|server1|SIEBENT1|SvrTaskPersist|1|239779840|240566272
1409142857|server1|SIEBENT1|SRBroker|1|68911104|70696960
1409142857|server1|SIEBENT1|eCommunicationsObjMgr_enu|6|8282849280|8294023168
1409142857|server1|SIEBENT1|WfProcBatchMgr|5|1289453568|1302757376
1409142857|server1|SIEBENT1|CommInboundRcvr|1|181596160|192397312
1409142857|server1|SIEBENT1|CustomAppObjMgr_sve|1|16248832|17416192
...
1409142857|server1|SIEBENT1|TOTAL_ALL_COMP|29|17303388160|17386618880
</pre>
<p>The first line is special: the very first character is the delimiter, then comes the list of columns.
Column names in lower case form the subject of monitoring: timestamp, host name, Siebel enterprise and
component name. Column names in upper case are the objects of monitoring: number of processes/MTS,
resident set size and virtual memory size (in bytes).
The last line is also special: totals across all components.</p>
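<p>To illustrate the correlate-and-count idea, here is a simplified sketch (not the gist's actual code; in real use the map would come from the root process log and the memory figures from something like "ps -e -o pid=,rss=,vsz=", and note that ps reports kilobytes where the script above reports bytes):</p>

```shell
# Hypothetical pid-to-component map (from the root process log) and
# ps-style "pid rss vsz" samples, tagged so one awk pass can read both.
map='3779 WfProcMgr
3780 WfProcMgr
3764 SCBroker'
ps_out='3779 120000 150000
3780 110000 140000
3764 6608 6880'
totals=$({ printf '%s\n' "$map" | sed 's/^/MAP /'
           printf '%s\n' "$ps_out" | sed 's/^/PS /'; } |
  awk '$1 == "MAP" { comp[$2] = $3; next }
       $1 == "PS"  { c = comp[$2]
                     if (c != "") { n[c]++; rss[c] += $3; vsz[c] += $4 } }
       END { for (c in n) printf "%s|%d|%d|%d\n", c, n[c], rss[c], vsz[c] }' |
  sort)
echo "$totals"
```

<p>Two WfProcMgr processes collapse into one line with PROC_COUNT 2, just like the multi-process components in the output above.</p>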
<p>If you are brave and lazy, you can run it without any installation at all:</p>
<p><code>curl -Ls http://goo.gl/W62mWG|awk -f -</code></p>
</description>
<pubDate>Fri, 22 Aug 2014 00:00:00 +0200</pubDate>
<link>http://siebelmonitoring.com/2014/08/22/siebel-component-memory-usage.html</link>
<guid isPermaLink="true">http://siebelmonitoring.com/2014/08/22/siebel-component-memory-usage.html</guid>
<category>components</category>
<category>memory</category>
<category>awk</category>
<category>linux</category>
<category>unix</category>
<category>memory leak</category>
<category>monitoring</category>
<category>siebel</category>
</item>
</channel>
</rss>