Why is MySQL InnoDB insert so slow?
I am using large random numbers as keys (they come in from another system). Inserts and updates on a fairly small table (a few million rows or so) are taking much longer than I think is reasonable.
I've distilled a very simple test to illustrate. I've tried to make the test table as simple as possible; my real code does not have such a simple layout and has relations and additional indices and so on. However, a simpler setup shows equivalent performance.
Here are the results:
creating the MyISAM table took 0.000 seconds
creating 1024000 rows of test data took 1.243 seconds
inserting the test data took 6.335 seconds
selecting 1023742 rows of test data took 1.435 seconds
fetching 1023742 batches of test data took 0.037 seconds
dropping the table took 0.089 seconds
creating the InnoDB table took 0.276 seconds
creating 1024000 rows of test data took 1.165 seconds
inserting the test data took 3433.268 seconds
selecting 1023748 rows of test data took 4.220 seconds
fetching 1023748 batches of test data took 0.037 seconds
dropping the table took 0.288 seconds
Inserting 1M rows into MyISAM takes 6 seconds; into InnoDB it takes 3433 seconds!
What on earth am I doing wrong? What is misconfigured? (MySQL is a normal Ubuntu installation with defaults.)
Here's the test code:
import sys, time, random
import MySQLdb as db

# usage: python script db_username db_password database_name
db = db.connect(host="127.0.0.1", port=3306, user=sys.argv[1], passwd=sys.argv[2], db=sys.argv[3]).cursor()

def test(engine):
    start = time.time()  # fine for this purpose
    db.execute("""
        CREATE TEMPORARY TABLE Testing123 (
        k INTEGER PRIMARY KEY NOT NULL,
        v VARCHAR(255) NOT NULL
        ) ENGINE=%s;""" % engine)
    duration = time.time() - start
    print "creating the %s table took %0.3f seconds" % (engine, duration)

    start = time.time()
    # 1 million rows in 100 chunks of 10K
    data = [[(str(random.getrandbits(48)) if a & 1 else int(random.getrandbits(31))) for a in xrange(10 * 1024 * 2)] for b in xrange(100)]
    duration = time.time() - start
    print "creating %d rows of test data took %0.3f seconds" % (sum(len(rows) / 2 for rows in data), duration)

    sql = "REPLACE INTO Testing123 (k,v) VALUES %s;" % ("(%s,%s)," * (10 * 1024))[:-1]
    start = time.time()
    for rows in data:
        db.execute(sql, rows)
    duration = time.time() - start
    print "inserting the test data took %0.3f seconds" % duration

    # execute the query
    start = time.time()
    query = db.execute("SELECT k,v FROM Testing123;")
    duration = time.time() - start
    print "selecting %d rows of test data took %0.3f seconds" % (query, duration)

    # get the rows in chunks of 10K
    rows = 0
    start = time.time()
    while query:
        batch = min(query, 10 * 1024)
        query -= batch
        rows += len(db.fetchmany(batch))
    duration = time.time() - start
    print "fetching %d batches of test data took %0.3f seconds" % (rows, duration)

    # drop the table
    start = time.time()
    db.execute("DROP TABLE Testing123;")
    duration = time.time() - start
    print "dropping the table took %0.3f seconds" % duration

test("MyISAM")
test("InnoDB")
InnoDB doesn't cope well with 'random' primary keys. Try a sequential key or auto-increment, and I believe you'll see better performance. Your 'real' key field could still be indexed, but for a bulk insert you might be better off dropping and recreating that index in one hit after the insert is complete. Would be interested to see your benchmarks for that!
Some related questions
- Slow INSERT into InnoDB table with random PRIMARY KEY column's value
- Why do MySQL InnoDB inserts / updates on large tables get very slow when there are a few indexes?
- InnoDB inserts very slow and slowing down
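To try that suggestion against the question's test script, one hedged variant of its CREATE TABLE could look like this (the secondary index name `k_idx` and the `id` column are my assumptions, not from the original):

```python
# Variant of the question's schema: a sequential AUTO_INCREMENT surrogate
# primary key, with the random external key k demoted to a plain secondary
# index, so bulk inserts append to the end of the clustered index.
CREATE_SEQUENTIAL = """
CREATE TEMPORARY TABLE Testing123 (
    id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    k  INTEGER NOT NULL,
    v  VARCHAR(255) NOT NULL,
    KEY k_idx (k)
) ENGINE=InnoDB;
"""
```

For a one-off bulk load, the answer's point stands: you could even create the table without `KEY k_idx (k)` and add the index afterwards in a single ALTER.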
InnoDB has transaction support. You're not using explicit transactions, so InnoDB has to do a commit after each statement ("performs a log flush to disk for every insert").
Execute this command before your loop:
START TRANSACTION
and this one after your loop:
COMMIT
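A minimal sketch of that change against the question's insert loop, assuming the same DB-API cursor (`sql` and the chunked `data` batches as in the test script):

```python
def insert_in_transaction(cursor, sql, batches):
    # One explicit transaction around all batches: InnoDB then flushes the
    # log once at COMMIT instead of once per REPLACE statement.
    cursor.execute("START TRANSACTION")
    try:
        for rows in batches:
            cursor.execute(sql, rows)
        cursor.execute("COMMIT")
    except Exception:
        # Undo the partial bulk load if any batch fails
        cursor.execute("ROLLBACK")
        raise
```

In the test script this would replace the bare `for rows in data: db.execute(sql, rows)` loop.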
I've needed to do testing of an insert-heavy application in both MyISAM and InnoDB simultaneously. There was a single setting that resolved the speed issues I was having. Try setting the following:
innodb_flush_log_at_trx_commit = 2
Make sure you understand the risks by reading about the setting here.
Also see https://dba.stackexchange.com/questions/12611/is-it-safe-to-use-innodb-flush-log-at-trx-commit-2/12612 and https://dba.stackexchange.com/a/29974/9405
I get very different results on my system, but this is not using the defaults. You are likely bottlenecked on innodb-log-file-size, which is 5M by default. At innodb-log-file-size=100M I get results like this (all numbers are in seconds):
                        MyISAM   InnoDB
create table             0.001    0.276
create 1024000 rows      2.441    2.228
insert test data        13.717   21.577
select 1023751 rows      2.958    2.394
fetch 1023751 batches    0.043    0.038
drop table               0.132    0.305
Increasing the innodb-log-file-size further will speed this up by a few seconds. Dropping the durability guarantees by setting innodb-flush-log-at-trx-commit=2 or 0 will improve the insert numbers somewhat as well.
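Assuming these are set in my.cnf, the two settings from this answer might look like the following (values illustrative, matching the benchmark above, not a tuned recommendation):

```ini
# my.cnf fragment -- illustrative values only
[mysqld]
innodb_log_file_size           = 100M  # default 5M is a common bottleneck
innodb_flush_log_at_trx_commit = 2     # trades durability for insert speed
```

Note that changing innodb_log_file_size requires a server restart (and, on older MySQL versions, removing the old log files first).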
The default values for InnoDB are actually pretty bad. InnoDB is very RAM dependent; you might find better results if you tweak the settings. Here's a guide I used: InnoDB optimization basics.
What's your InnoDB buffer pool size? Make sure you've set it to 75% of your RAM. Inserts are usually faster in primary key order for InnoDB, but with a big pool size you should see good speeds.
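As a back-of-the-envelope check, the 75%-of-RAM rule of thumb from this answer can be sketched like this (the helper name is purely illustrative):

```python
def suggested_buffer_pool_bytes(total_ram_bytes, fraction=0.75):
    # Rule of thumb from the answer: ~75% of RAM on a dedicated MySQL host.
    # Round down to a whole number of bytes for innodb_buffer_pool_size.
    return int(total_ram_bytes * fraction)

# e.g. an 8 GiB host would get a 6 GiB buffer pool
```

The current value can be checked with `SHOW VARIABLES LIKE 'innodb_buffer_pool_size';`.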
This is an old topic but frequently searched. As long as you are aware of the risks (as stated by @philip Koshy above) of losing committed transactions from the last second or so, you may set these global parameters before massive updates:
innodb_flush_log_at_trx_commit=0
sync_binlog=0
then turn them back on (if so desired) after the update is complete:
innodb_flush_log_at_trx_commit=1
sync_binlog=1
for full ACID compliance.
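Sketched as a pair of statement lists around the bulk work (assumes a connection privileged to change globals, and acceptance of the durability trade-off described above):

```python
# Relax durability before a massive update, restore full ACID afterwards.
RELAX_DURABILITY = [
    "SET GLOBAL innodb_flush_log_at_trx_commit = 0;",
    "SET GLOBAL sync_binlog = 0;",
]
RESTORE_DURABILITY = [
    "SET GLOBAL innodb_flush_log_at_trx_commit = 1;",
    "SET GLOBAL sync_binlog = 1;",
]

def bulk_update(cursor, statements):
    # Run the heavy statements bracketed by the relax/restore toggles
    for stmt in RELAX_DURABILITY + statements + RESTORE_DURABILITY:
        cursor.execute(stmt)
```

A try/finally around the restore step would be prudent so the server is not left in the relaxed state if a statement fails.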
There is a huge difference in write/update performance when both of these are turned off versus on. In my experience, the other things discussed above make some difference, but only marginally.
One other thing that greatly impacts updates/inserts is a full-text index. In one case, inserting 2 million rows into a table with a full-text index on two text fields took 6 hours; the same insert took only 10 minutes after the full-text index was removed. More indexes, more time. So indexes other than the unique and primary keys may be removed prior to massive inserts/updates.
Things that sped up my inserts:
- I removed all keys from the table before a large insert into the empty table
- Then I found I had a problem: the index did not fit in memory
- I also found I had sync_binlog=0 (should be 1) even though the binlog is not used
- I also found I had not set innodb_buffer_pool_instances
Solution
- Create a new UNIQUE key that is identical to your current PRIMARY key
- Add a new column id as an unsigned integer with auto_increment
- Create the primary key on the new id column
Bam, immediate 10x+ insert improvement.
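The three steps above, as a hedged sketch in MySQL DDL (the table/column names Testing123 and k are taken from the question's test script; `id`, `k_unique` are illustrative names):

```python
# Swap a random PRIMARY KEY for an AUTO_INCREMENT surrogate key, keeping
# the old key's uniqueness as a secondary index.
KEY_SWAP = [
    # 1. preserve uniqueness of the old primary key as a secondary index
    "ALTER TABLE Testing123 ADD UNIQUE KEY k_unique (k);",
    # 2 + 3. drop the old primary key, then add the auto-increment id and
    #        make it the new (clustered) primary key in one statement
    "ALTER TABLE Testing123 DROP PRIMARY KEY, "
    "ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;",
]
```

On a large existing table these ALTERs are themselves slow (they rebuild the clustered index), so the swap pays off mainly when many more bulk inserts are still to come.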
ReferenceURL : https://stackoverflow.com/questions/9819271/why-is-mysql-innodb-insert-so-slow