Opened 12 years ago
Closed 9 years ago
#237 closed question (invalid)
Deleting collections should be split into several transactions
| Reported by: | Dimitar Misev | Owned by: | Dimitar Misev |
|---|---|---|---|
| Priority: | minor | Milestone: | Future |
| Component: | qlparser | Version: | 8.3 |
| Keywords: | | Cc: | Peter Baumann, abeccati, sebrecht, damiano, mantovani |
| Complexity: | Very Hard | | |
Description
It seems that deleting a collection is done in a single transaction: when deleting a lot of data, the rasdaman log gets stuck on the following:
```
[2012-11-09 14:56:05] request from 127.0.0.1 Command: executeUpdate query 'drop collection jacobs_rasimport'...parsing...parsed. checking semantics...evaluating...ok done
[2012-11-09 14:56:06] request completed; request time=0.014s
[2012-11-09 14:56:06] request from 127.0.0.1 Request: 'commit TA'...
```
For a lot of data this easily exceeds the PostgreSQL configuration parameter that caps the number of locks per transaction (`max_locks_per_transaction`), and the update statement then fails with base DBMS error 206.
The PostgreSQL logs give these hints:
```
...
2012-11-09 14:37:45 CET HINT: Consider increasing the configuration parameter "checkpoint_segments".
2012-11-09 14:38:09 CET LOG: checkpoints are occurring too frequently (24 seconds apart)
2012-11-09 14:38:09 CET HINT: Consider increasing the configuration parameter "checkpoint_segments".
2012-11-09 14:38:34 CET LOG: checkpoints are occurring too frequently (25 seconds apart)
2012-11-09 14:38:34 CET HINT: Consider increasing the configuration parameter "checkpoint_segments".
2012-11-09 14:38:58 CET LOG: checkpoints are occurring too frequently (24 seconds apart)
2012-11-09 14:38:58 CET HINT: Consider increasing the configuration parameter "checkpoint_segments".
2012-11-09 14:39:24 CET LOG: checkpoints are occurring too frequently (26 seconds apart)
2012-11-09 14:39:24 CET HINT: Consider increasing the configuration parameter "checkpoint_segments".
2012-11-09 14:45:29 CET WARNING: out of shared memory
2012-11-09 14:45:29 CET ERROR: out of shared memory
2012-11-09 14:45:29 CET HINT: You might need to increase max_locks_per_transaction.
```
Increasing the indicated parameters helps for a while, but with enough data the delete eventually fails again; it just takes longer.
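For reference, a sketch of the tuning the hints point at. The parameter names are real PostgreSQL settings; the values are illustrative only, and `checkpoint_segments` exists only up to PostgreSQL 9.4:

```sql
-- Inspect the current limits (in psql):
SHOW max_locks_per_transaction;   -- default 64
SHOW checkpoint_segments;         -- default 3; removed in PostgreSQL 9.5

-- Watch the lock table fill up from a second session
-- while the 'commit TA' is in flight:
SELECT count(*) FROM pg_locks;

-- postgresql.conf: the shared-memory lock table holds roughly
-- max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- entries, so raising the first value enlarges it (restart required):
--   max_locks_per_transaction = 256
--   checkpoint_segments = 32
```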
Change History (11)
comment:1 by , 12 years ago
| Description: | modified (diff) |
|---|---|
comment:2 by , 12 years ago
| Milestone: | → 8.4 |
|---|---|
comment:3 by , 12 years ago
| Milestone: | 8.4 → 8.5 |
|---|---|
comment:4 by , 11 years ago
| Cc: | added |
|---|---|
| Complexity: | → Very Hard |
| Milestone: | 8.5 → 9.0 |
comment:5 by , 11 years ago
Indeed, Susanne, hopefully you can help us here. IMO this should be fixed within PostgreSQL: splitting a user transaction into several independent transactions breaks the transaction contract from the client's perspective.
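To illustrate the contract violation, a hypothetical sketch (`ras_tiles` and its columns are made-up names, not the actual rasdaman schema): if the server silently split one `drop collection` into two base-DBMS transactions, a failure between them would leave the collection half deleted, which a client that issued a single statement cannot anticipate or roll back.

```sql
-- Hypothetical split of one logical delete into two transactions:
BEGIN;
DELETE FROM ras_tiles WHERE coll_id = 42 AND tile_id < 1000;
COMMIT;    -- first half is now durable

BEGIN;
DELETE FROM ras_tiles WHERE coll_id = 42 AND tile_id >= 1000;
ROLLBACK;  -- a crash or abort here only undoes the second half:
           -- the collection is left partially deleted
```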
comment:6 by , 11 years ago
| Cc: | added |
|---|---|
Just before opening a duplicate ticket I found this issue already under discussion.
I have put the MEEO staff in Cc.
comment:7 by , 11 years ago
Note that we have this issue only with PostgreSQL; it doesn't happen with blob storage on the filesystem.
comment:9 by , 11 years ago
| Type: | defect → question |
|---|---|
If PG does it _very_ naively (keeping a BFIM and AFIM, i.e. a before- and after-image, for each blob), it might end up writing about twice the collection size.
If it does it the crude way (no TA support for blobs), the cost will be mainly one tuple per object, plus index tuples, plus blob deallocation time.
Note that blob behavior has tended to change across versions, so v9 might be the most interesting one to look at.
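One way to gauge what is at stake, assuming the tiles live in PostgreSQL large objects (the `pg_largeobject` catalog); under the naive BFIM+AFIM scenario above, deletion would write roughly twice this volume:

```sql
-- Total on-disk size of all large-object (blob) data in this database:
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

-- Number of distinct large objects (roughly one per stored tile):
SELECT count(DISTINCT loid) FROM pg_largeobject;
```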
Curious about Susanne's opinion on this PostgreSQL issue.
Also, I have changed the type to "question", since this is a PostgreSQL issue, not a rasdaman one.
comment:10 by , 10 years ago
| Milestone: | 9.0.x → Future |
|---|---|
| Priority: | major → minor |
comment:11 by , 9 years ago
| Resolution: | → invalid |
|---|---|
| Status: | new → closed |
Could be something for Susanne to look into at some point, along with the blob → bytea update?
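For context, a minimal sketch of what the blob → bytea update mentioned above could look like. It is entirely hypothetical: `ras_tiles` and `blob_oid` are made-up names, and `lo_get()` exists only from PostgreSQL 9.4 on:

```sql
-- Move tile data from large objects into an inline bytea column:
ALTER TABLE ras_tiles ADD COLUMN data bytea;
UPDATE ras_tiles SET data = lo_get(blob_oid);

-- Then free the old large objects and drop the oid column:
SELECT lo_unlink(blob_oid) FROM ras_tiles;
ALTER TABLE ras_tiles DROP COLUMN blob_oid;
```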