Odoo Experience 2019

Tools for solving perf issues

@nseinlet • nse@odoo.com

1. Basic ones
2. Python
3. PostgreSQL
4. Conclusions
“Never a database with production-grade data on your laptop.
N E V E R
Some facts
• Using a Linux computer
• Mainly command line
• Through SSH
• And obviously a web browser...
1. Most basic ones
Since 12.0:

[options]
log_level = info

In the Odoo logs:
2018-09-28 12:19:01,246 5283 INFO saas115nse werkzeug: 127.0.0.1 - - [28/Sep/2018 12:19:01] "GET /web HTTP/1.0" 200 - 873 0.429 1.593
2018-09-28 12:18:43,322 5283 INFO saas115nse werkzeug: 127.0.0.1 - - [28/Sep/2018 12:18:43] "GET /web HTTP/1.0" 200 - 1484 0.932 4.519
The three trailing numbers are, with their thresholds (from odoo/netsvc.py):

Metric                                              Thresholds
Number of queries                                   0 / 100 / 1000
Time spent executing SQL queries (s)                0 / 0.1 / 3
Time spent to process the request, except SQL (s)   0 / 1 / 5
https://github.com/odoo/odoo/blob/saas-11.5/odoo/netsvc.py#L90
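Reading the first line above: the request ran 873 SQL queries, spent 0.429 s executing them, and 1.593 s processing outside SQL.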
Parse/sort them
Processing time per hour?
2018-09-28 12:19:01,246 5283 INFO saas115nse werkzeug: 127.0.0.1 - - [28/Sep/2018 12:19:01] "GET /web HTTP/1.0" 200 - 873 0.429 1.593
2018-09-28 12:18:43,322 5283 INFO saas115nse werkzeug: 127.0.0.1 - - [28/Sep/2018 12:18:43] "GET /web HTTP/1.0" 200 - 1484 0.932 4.519
cat odoo.log | grep -v longpolling | awk '{print $NF " " $0}' | sort -n | less
4.519 2018-09-28 12:18:43,322 5283 INFO saas115nse werkzeug: 127.0.0.1 - - [28/Sep/2018 12:18:43] "GET /web HTTP/1.0" 200 - 1484 0.932 4.519
1.593 2018-09-28 12:19:01,246 5283 INFO saas115nse werkzeug: 127.0.0.1 - - [28/Sep/2018 12:19:01] "GET /web HTTP/1.0" 200 - 873 0.429 1.593
cat odoo.log | grep -v longpolling | awk '{split($2,a,":"); arr[a[1]]+=$NF;arr2[a[1]]+=1;} END {for (i in arr) {print i,arr[i],arr2[i]}}'
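The same per-hour aggregation, as a minimal Python sketch (assuming the werkzeug log format shown above; the file name is illustrative):

from collections import defaultdict

totals = defaultdict(float)  # hour -> summed processing time (last field, excl. SQL)
counts = defaultdict(int)    # hour -> number of requests

with open("odoo.log") as f:
    for line in f:
        if "longpolling" in line:
            continue
        parts = line.split()
        try:
            elapsed = float(parts[-1])  # last field of a werkzeug perf line
        except (ValueError, IndexError):
            continue  # not a perf line
        hour = parts[1].split(":")[0]   # "12:19:01,246" -> "12"
        totals[hour] += elapsed
        counts[hour] += 1

for hour in sorted(totals):
    print(hour, totals[hour], counts[hour])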
Get system info
$ inxi -Fxz
2. Python
1. For Python profiling details, check my talk from last year
2. Reproducing is the key
3. Doing it manually is not accurate
Let's work on a use case:
1. Have the description of the issue
2. Have the customer database/code running on an isolated server
3. Reproduce the issue using a script
4. Profile Python with odoo.tools.misc.profile (see the sketch after this list)
5. Log PostgreSQL queries
6. Analyse and solve
7. Do it again and again
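Step 4 refers to the profile decorator shipped in odoo.tools.misc; a minimal sketch of how such a decorator is typically applied (the model, method and dump path are illustrative, and the decorator's exact signature can vary between Odoo versions):

from odoo import models
from odoo.tools.misc import profile

class StockPicking(models.Model):
    _inherit = "stock.picking"

    # Dump cProfile stats for each call; inspect later with pstats or gprof2dot
    @profile('/tmp/button_validate.profile')
    def button_validate(self):
        return super(StockPicking, self).button_validate()

And here is the reproduction script for the use case itself (steps 2-3):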
import datetime

import openerplib  # pip install openerp-client-lib

connection = openerplib.get_connection(
    hostname="myserver", port=443, database="stock", login="admin",
    password="admin_password",  # the real password was truncated on the slide
)
connection.check_login(force=False)
picking_model = connection.get_model("stock.picking")
sm_model = connection.get_model("stock.move")
product_model = connection.get_model("product.product")

# Create test products
for i in range(1, 100):
    product_model.create({"name": "product %s" % i, "type": "product", "uom_id": 1})

# Build one move line per product
lines = []
prod_ids = product_model.search([("type", "=", "product"), ("uom_id", "=", 1)])
for prod in prod_ids:
    lines.append((0, 0, {"product_id": prod, "product_uom_qty": 1, "name": "product %s" % prod}))

print("Start picking : %s" % datetime.datetime.now().isoformat())
pick_id = picking_model.create(
    {
        "scheduled_date": "2017-09-10 11:30:29",
        "move_type": "direct",
        "picking_type_id": 7,
        "location_dest_id": 9,
        "picking_type_code": "outgoing",
        "location_id": 26,
        "move_lines": lines,
    }
)
print("Picking created (%s lines) : %s" % (len(lines), datetime.datetime.now().isoformat()))

move_ids = sm_model.search([("picking_id", "=", pick_id)])
for move_id in move_ids:
    sm_model.write(move_id, {"quantity_done": 1})
print("Picking lines done : %s" % datetime.datetime.now().isoformat())
picking_model.button_validate(pick_id)
print("Picking validated : %s" % datetime.datetime.now().isoformat())
POS
The issue is a concurrency issue, which means we should call multiple workers at the same time, in a repeatable way, to create POS orders or close POS sessions.
I've made the password computable from the username to ease the reproduction protocol.
import sys
import random
import threading
from datetime import datetime, date

import odoolib  # pip install odoo-client-lib

def selling(username, password, size):
    [...]

def closing(username, password):
    # Connection parameters were truncated on the slide; login/password assumed
    connection = odoolib.get_connection(hostname="my_server", database="mydb",
                                        login=username, password=password)
    config_model = connection.get_model('pos.config')
    session_model = connection.get_model('pos.session')
    order_model = connection.get_model('pos.order')
    # Search config; the derivation of pos_id was cut off on the slide,
    # assumed here to match the username
    pos_id = username
    config_id = config_model.search([('name', 'ilike', pos_id)])
    if config_id:
        config_id = config_id[0]
    # Open session (a second read field was truncated on the slide)
    config_model.open_session_cb(config_id)
    pos_config = config_model.read(config_id, ['current_session_id'])
    session_id = pos_config['current_session_id'][0]
    order_ids = order_model.search([['session_id', '=', session_id]])
    t1 = datetime.now()
    session_model.action_pos_session_closing_control(session_id)
    t2 = datetime.now()
    print("%s closed session %s in %s seconds (%s orders)"
          % (username, session_id, (t2 - t1).total_seconds(), len(order_ids)))

if __name__ == "__main__":
    # The main block was heavily truncated on the slide; reconstructed shape
    if len(sys.argv) < 3:
        print("Usage: ...")  # usage text and examples elided on the slide
        print("")
    else:
        concurrency = int(sys.argv[1])
        lst_thd = []
        for i in range(concurrency):
            pos_id = i + 1
            username = "pos%s" % pos_id      # assumption: one POS user per thread
            password = username              # password computable from the user
            if sys.argv[2] == "sell":
                size = random.randint(1, 10)  # assumption: random basket size
                thd = threading.Thread(target=selling, args=(username, password, size))
            else:
                thd = threading.Thread(target=closing, args=(username, password))
            thd.start()
            lst_thd.append(thd)
        for thd in lst_thd:
            thd.join()
On various workers, in various chunks.
import threading

import odoolib  # pip install odoo-client-lib

# How many workers will I load?
max_connections = 48
semaphore = threading.BoundedSemaphore(max_connections)
lst_thd = []

def process_company(company_id, date_from, date_to):
    try:
        # Credentials were truncated on the slide
        connection = odoolib.get_connection(hostname="load", database="odoo", login="admin", password="admin")
        [...]
        chunk_size = 2000
        chunks_rent_ids = [rent_ids[x : x + chunk_size] for x in range(0, len(rent_ids), chunk_size)]
        [...]
    finally:
        # Free the slot even if the worker failed
        semaphore.release()

if __name__ == "__main__":
    connection = odoolib.get_connection(hostname="load", database="odoo", login="admin", password="admin")  # truncated on the slide
    [...]
    # Split the work
    thd_to_launch = []
    for p in to_process:
        thd_to_launch.append((p[0], d_start.strftime("%Y-%m-%d"), d_end.strftime("%Y-%m-%d")))
    for ttl in thd_to_launch:
        # Block until a worker slot is free
        semaphore.acquire()
        thd = threading.Thread(target=process_company, args=ttl)
        thd.start()
        lst_thd.append(thd)
    for thd in lst_thd:
        thd.join()
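Note the pattern: the BoundedSemaphore is acquired in the main thread before each launch and released in the worker's finally block, so at most max_connections workers hit the server at once, even when a worker fails.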
3. PostgreSQL
“Because a good presentation has PostgreSQL slides. --Self
pg_activity
I simply launch it with more rights (for instance sudo -u postgres pg_activity).
~/.psqlrc
\set QUIET 1
\pset null '¤'
\set PROMPT1 '%[%033[1m%][%/] # '
-- SELECT * FROM<enter>. %R shows what type of input it expects.
\set PROMPT2 '... > '
\timing
\x auto
\set VERBOSITY verbose
\set HISTFILE ~/.psql_history- :DBNAME
\set HISTCONTROL ignoredups
\set COMP_KEYWORD_CASE upper
\setenv EDITOR 'vim'
\unset QUIET
Parallel workers

The number of workers used for query processing follows a geometric progression with 3 as the common ratio and min_parallel_relation_size as the scale factor. With the default of 8 MB:

Size      Workers
<8 MB     0
<24 MB    1
<72 MB    2
<216 MB   3
<648 MB   4
<1944 MB  5
<5832 MB  6
...       ...
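A minimal sketch of that progression (assuming the 8 MB default; the planner also caps the result with the settings below):

# One more planned worker each time the table size triples
# past the min_parallel_relation_size scale factor.
def planned_workers(table_size_mb, scale_mb=8):
    workers, threshold = 0, scale_mb
    while table_size_mb >= threshold:
        workers += 1
        threshold *= 3
    return workers

for size in (7, 8, 24, 216, 648, 5832):
    print("%5d MB -> %d worker(s)" % (size, planned_workers(size)))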
1. Improve query speed at CPU cost
2. max_worker_processes
3. max_parallel_workers: system-wide workers for parallel processing (<= max_worker_processes)
4. max_parallel_workers_per_gather: per-query workers for parallel processing (<= max_parallel_workers); max 4 per our recommendation
WAL size growth
The relevant functions were renamed in PostgreSQL 10:

PostgreSQL < 10            PostgreSQL >= 10
pg_current_xlog_location   pg_current_wal_lsn
pg_xlog_location_diff      pg_wal_lsn_diff
[postgres] # SELECT pg_current_wal_insert_lsn();
pg_current_wal_insert_lsn
---------------------------
3D/B4020A58
(1 row)
Time: 13.237 ms
Five minutes later...

[postgres] # SELECT pg_current_wal_insert_lsn();
pg_current_wal_insert_lsn
---------------------------
3E/2203E0F8
(1 row)
Time: 7.137 ms
[postgres] # SELECT pg_wal_lsn_diff('3E/2203E0F8', '3D/B4020A58');
pg_wal_lsn_diff
-----------------
1845614240
(1 row)
Time: 1.041 ms
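pg_wal_lsn_diff returns a byte count: 1845614240 bytes is roughly 1.7 GiB of WAL written in five minutes, i.e. about 6 MB/s, which gives a quick estimate of the write load.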
4. Conclusions
Tools for solving performance issues

Reproduce → Profile → Analyse → Improve (shown as a cycle diagram on the slide)
1. Not just a fix_it.py, sorry.
2. Reproducing in a reliable way is the key.
3. A load test can prevent this.
Odoo Experience 2019

Thank you!

@nseinlet • nse@odoo.com
#odooexperience