9/11/2023

Mysql batch update

I'm writing an application that needs to flush out a large number of updates to the database for an extended period of time, and I've gotten stuck at how to optimize the query. The current method is INSERT ... VALUES (...), (...) ON DUPLICATE KEY UPDATE, which works to batch all of the values into one query, but executes excruciatingly slowly on large tables. I don't ever actually need to insert rows.

Other approaches I've seen are to update using SET value = CASE WHEN ... (which would be hard to generate due to the way I'm building the queries, and I'm not sure about the performance of CASE for hundreds/thousands of keys), and simply multiple concatenated updates. Would either of these be faster than my current method?

It baffles me that, as far as I can tell, there's no idiomatic, efficient way to do this in MySQL. If there really isn't a way that's faster than ON DUPLICATE KEY, would it be worth it to switch to PostgreSQL and use its UPDATE ... FROM syntax? Any other suggestions are also greatly appreciated!

Edit: here's one of the tables that gets updated frequently. I've removed column names due to them being irrelevant.

    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `a` bigint(20) unsigned NOT NULL DEFAULT '0',
    `b` bigint(20) unsigned NOT NULL DEFAULT '0',

Answer: usually, when I've had to do this, I've created a temporary table, inserted my changes into that, and then done an update with the join as shown below.

    INSERT INTO foo (id, category, animal) VALUES ...;
    -- changes to make animal a bit more specific
    INSERT INTO foo_tmp (id, category, animal) VALUES ...;
    UPDATE foo f JOIN foo_tmp t ON f.id = t.id
    SET f.category = t.category, f.animal = t.animal;

Would love to have a DBA's thoughts on whether this is as efficient as the recommended "why don't we do it in a transaction?" approach.

The next issue arises with this configuration: a Spring ThreadPoolTaskExecutor running between 10 and 20 threads. The issue is a deadlock when some threads try to do an UPDATE ... WHERE in a single table.

The table is:

    CREATE TABLE IF NOT EXISTS `invoice_events` (
      `FECHA_FAC` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
      `PERIOD_TYPE` varchar(50) COLLATE utf8_spanish_ci DEFAULT NULL,
      `PRODUCT_ID` varchar(50) COLLATE utf8_spanish_ci DEFAULT NULL,
      `RATE_ID` varchar(50) COLLATE utf8_spanish_ci DEFAULT NULL,
      `INVOICE_INTERNAL_ID` bigint(20) unsigned DEFAULT NULL,
      `COUNTRY_CODE` varchar(4) COLLATE utf8_spanish_ci DEFAULT NULL,
      `SOURCE_MSISDN` varchar(50) COLLATE utf8_spanish_ci DEFAULT NULL,
      `TARGET_MSISDN` varchar(100) CHARACTER SET utf8 DEFAULT NULL,
      `CATEGORY` varchar(50) COLLATE utf8_spanish_ci DEFAULT NULL,
      `SERVICE` varchar(50) COLLATE utf8_spanish_ci DEFAULT NULL,
      `USAGE_TYPE` varchar(50) CHARACTER SET utf8 DEFAULT NULL,
      KEY `IDX_INV_INT_ID` (`INVOICE_INTERNAL_ID`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish_ci
    PARTITION BY RANGE( TO_DAYS(FECHA_FAC) ) (
      PARTITION p201511 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201512 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201601 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201602 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201603 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201604 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201605 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201606 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201607 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201608 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201609 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201610 VALUES LESS THAN (TO_DAYS('')),
      PARTITION p201611 VALUES LESS THAN (TO_DAYS('')),
      PARTITION future VALUES LESS THAN MAXVALUE
    );

The UPDATE statement is:

    update invoice_events
    set invoice_internal_id = 978173
    where fecha_fac between ' 00:00:00' and ' 23:59:59.999'
      and source_msisdn in ( '239642983345' )
      and invoice_internal_id is null
      and country_code = 'ES'

The MySQL EXPLAIN for this statement is: (EXPLAIN output was an image and was not preserved)

The SHOW ENGINE INNODB STATUS output for the issue is:

    11:08:23 0x7f8bcc1aa700
    TRANSACTION 7031093, ACTIVE 0 sec fetching rows
    LOCK WAIT 406 lock struct(s), heap size 41168, 2884 row lock(s), undo log entries 375
    *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
    RECORD LOCKS space id 12252 page no 54015 n bits 512 index IDX_MSISDN of table `my_schema`.`invoice_events` /* Partition `p201603` */ trx id 7031093 lock_mode X waiting
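One common mitigation for transient InnoDB deadlocks like the one above is to catch the deadlock error and retry the transaction, since InnoDB automatically rolls back one victim. Below is a minimal, hedged sketch of such a retry loop; `DeadlockError` stands in for the real driver exception (MySQL reports deadlocks as error 1213), and `flaky_update` is a made-up stand-in for the application's actual UPDATE transaction:

```python
import time

class DeadlockError(Exception):
    """Stand-in for the driver's deadlock exception (MySQL error 1213)."""
    pass

def run_with_retry(work, attempts=3, backoff=0.05):
    """Run `work()`; on a deadlock, back off and retry up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        try:
            return work()
        except DeadlockError:
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(backoff * attempt)  # linear backoff between retries

# Demo: fail with a deadlock twice, then succeed on the third attempt.
calls = {"n": 0}
def flaky_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError("1213: Deadlock found when trying to get lock")
    return "committed"

result = run_with_retry(flaky_update)
print(result)  # committed
```

Retries pair well with keeping each transaction small (e.g. one MSISDN per UPDATE) and touching rows in a consistent index order, which reduces how often two threads lock the same index ranges in opposite orders.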
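Returning to the batch-update question at the top of the post: the temporary-table-plus-join pattern from the answer can be sketched end to end. This is a runnable illustration, not the poster's exact code — it uses Python's built-in sqlite3 module, the table names and sample rows are invented, and because SQLite lacks MySQL's multi-table `UPDATE ... JOIN` syntax the final step uses correlated subqueries instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (id INTEGER PRIMARY KEY, category TEXT, animal TEXT);
    CREATE TEMP TABLE foo_tmp (id INTEGER PRIMARY KEY, category TEXT, animal TEXT);
""")

# Base data.
conn.executemany("INSERT INTO foo VALUES (?, ?, ?)",
                 [(1, "mammal", "dog"), (2, "mammal", "cat"), (3, "bird", "owl")])

# Stage all pending changes in the temporary table in one batch.
conn.executemany("INSERT INTO foo_tmp VALUES (?, ?, ?)",
                 [(1, "mammal", "labrador"), (3, "bird", "barn owl")])

# Apply every staged change with a single UPDATE. In MySQL this would be:
#   UPDATE foo f JOIN foo_tmp t ON f.id = t.id
#   SET f.category = t.category, f.animal = t.animal;
conn.execute("""
    UPDATE foo
    SET category = (SELECT category FROM foo_tmp t WHERE t.id = foo.id),
        animal   = (SELECT animal   FROM foo_tmp t WHERE t.id = foo.id)
    WHERE id IN (SELECT id FROM foo_tmp)
""")

rows = conn.execute("SELECT id, category, animal FROM foo ORDER BY id").fetchall()
print(rows)  # [(1, 'mammal', 'labrador'), (2, 'mammal', 'cat'), (3, 'bird', 'barn owl')]
```

The appeal of this pattern is that the many small writes become one bulk insert (fast) plus one set-based update (one statement, one scan), instead of thousands of single-row statements.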