I have a table with 3 fields:
id : int - primary key.
un_tid : int - unique index.
un_name : text
I had nearly 13 million records in the table. I imported an SQL file containing 17 million records (some of them duplicates of rows already in the table) into that table. The imported SQL file uses "INSERT IGNORE INTO" to insert the rows. The import finished successfully, but afterwards I found more than 1 million duplicate values in the un_tid field of the table. How is this possible? Is it a bug in MySQL, or have I done something wrong?
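To show why this result is surprising, here is a minimal sketch of the expected behavior, using SQLite's `INSERT OR IGNORE` as a stand-in for MySQL's `INSERT IGNORE` (the table name and sample values are illustrative, not from the actual import file): with a working unique index, re-importing overlapping rows should silently skip the duplicates, so no duplicate un_tid values can ever be stored.

```python
import sqlite3

# In-memory table mirroring the schema in the question.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE member_name (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        un_tid  INTEGER NOT NULL UNIQUE,
        un_name TEXT NOT NULL
    )
""")

# Pre-existing data.
conn.execute("INSERT INTO member_name (un_tid, un_name) VALUES (110769551, 'alice')")

# "Import" containing one duplicate un_tid and one new row.
rows = [(110769551, 'alice-dup'), (110769552, 'bob')]
conn.executemany(
    "INSERT OR IGNORE INTO member_name (un_tid, un_name) VALUES (?, ?)", rows
)

# With an intact unique index there is at most one row per un_tid.
dupes = conn.execute("""
    SELECT un_tid, COUNT(*) AS cnt
    FROM member_name
    GROUP BY un_tid
    HAVING cnt > 1
""").fetchall()
print(dupes)  # empty: the duplicate row was skipped, not inserted
```

The `GROUP BY ... HAVING cnt > 1` query is also the standard way to count how many un_tid values are duplicated in the real table.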
Screenshot in phpMyAdmin: screenshot 1
Screenshot for `... WHERE un_tid+0 = 110769551` in the MySQL CLI: screenshot 2
Version: MySQL 5.5.46 on Ubuntu 14.04.2
CREATE TABLE IF NOT EXISTS `member_name` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `un_tid` int(11) unsigned NOT NULL,
  `un_name` varchar(150) COLLATE utf8_persian_ci NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `un_tid` (`un_tid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_persian_ci;