Also, nothing wrong with your stats, assuming you're using the t-test for unequal variances (Welch's t-test); it's the appropriate test here. I'm not an expert statistician, but I use enough of these sorts of tests in my field that I can do this without running to an actual statistician. I agree that a visual depiction of the 95% CIs means more to most people, and is arguably more meaningful than a statistical test used to derive a p-value.
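As a rough sketch of what I mean (the data here is invented purely for illustration), this is how you'd run Welch's t-test and compute 95% CIs for the group means in Python with SciPy:

```python
# Sketch: Welch's t-test plus 95% CIs for two groups with unequal
# variances. The group data below is made up for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(10.0, 2.0, size=30)  # hypothetical group A
b = rng.normal(11.0, 4.0, size=50)  # hypothetical group B, larger spread

# equal_var=False is what makes this Welch's (unequal-variance) t-test
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)

def ci95(x):
    """95% confidence interval for the mean, using the t distribution."""
    m, se = np.mean(x), stats.sem(x)
    half = se * stats.t.ppf(0.975, df=len(x) - 1)
    return m - half, m + half

print(f"p = {p_value:.4f}")
print("A 95% CI:", ci95(a))
print("B 95% CI:", ci95(b))
```

Plotting those two intervals side by side usually tells the story better than the p-value on its own.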
That said, despite my providing some sample sizes you would need to detect a "statistically significant" difference between groups of different sizes, that exercise is mostly an incorrect usage of p-values and power, and is borderline meaningless beyond academic curiosity.
I'd warn that the importance of statistical significance should be tempered with an understanding of the data and of what a statistical test, and a p-value, actually mean (note: it's not what was taught in high-school textbooks, nor in many a 101 stats textbook).
Nor does "significance", when we're talking stats, mean "meaningful or important", or that your result is "correct".
It's a bloody hard concept, and I'm not certain I understand p-values fully myself. However, if you're interested in understanding it more, and in learning how misunderstood and how pervasive the incorrect usage of p-values is, I'd suggest reading the 13 misconceptions about p-values here: https://www.ohri.ca/newsroom/seminar...03,%202014.pdf
And the delightfully named paper: The insignificance of statistical significance testing: https://core.ac.uk/download/pdf/188120111.pdf
Or, for a simpler run-down: https://en.wikipedia.org/wiki/Misuse_of_p-values
In short, make decisions based on the observed effect; it's the most meaningful thing in your data. But understand that with fewer samples, the observed effect may be an extreme result arising purely by chance, because of the variability of the data.
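You can see this for yourself with a quick simulation (my own toy example, not anything from this thread): both "groups" below are drawn from the same distribution, so the true effect is zero, yet small samples routinely throw up large observed differences by chance.

```python
# Toy simulation: repeated "experiments" where both groups come from the
# SAME standard normal distribution, i.e. the true difference is zero.
import numpy as np

rng = np.random.default_rng(0)

def max_abs_observed_diff(n, trials=2000):
    """Largest observed difference in group means across many repeats."""
    diffs = [abs(rng.normal(0, 1, n).mean() - rng.normal(0, 1, n).mean())
             for _ in range(trials)]
    return max(diffs)

# With n=5 per group you'll see much larger spurious "effects"
# than with n=500, even though nothing real is going on in either case.
print("n=5:  ", max_abs_observed_diff(5))
print("n=500:", max_abs_observed_diff(500))
```

The same observed difference that would be unremarkable noise at n=5 would be genuinely surprising at n=500, which is why the effect size always needs to be read alongside the sample size.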
Do not dismiss results because they are not "significantly different" nor should you accept results simply because they are "significant".