In my understanding, Midjourney is primarily one or more very carefully finetuned Stable Diffusion models. "Finetuned" basically means "trained for extra steps on specific material to emphasize specific capabilities" - a particular style, stronger compositional ability, and so on - for example, by training only on images with high compositional quality. You can get much more artistic results if you can run SD on your own graphics card using a finetuned model. This requires some tech savvy, but the bar (including how big a graphics card you need) is being steadily lowered by open-source hobbyist efforts.
There are two websites where, to my knowledge, the open-source AI community primarily shares its finetuned/merged models. The original Stable Diffusion (SD) models were better than anything that came before, but the main way to get quality approaching Midjourney's is to use a model finetuned for higher-quality outputs. The original SD models were trained on general databases of all sorts of images, in both content and quality. That yields a model with good general ability but poor performance in specific or complicated areas - like hands and faces, which require fine, exact detail and structure to get right. Finetunes often train further on only artistic or high-quality images, essentially turning these models into art specialists with whatever focus matters to the trainer: realism, better success rates at rendering faces and hands, and/or a particular style like cartoon/anime, 3D-model imitation (such as Pixar), or painting styles.
Hugging Face – The AI community building the future. (More oriented toward software-savvy people; it's a little bit like GitHub, I think. Both GPT text models and Stable Diffusion (SD) models are shared here, along with a variety of other AI resources, including entire databases that can be used for training or for evaluating the performance of AI models.)
Civitai | Stable Diffusion models, embeddings, LoRAs and more (Stable Diffusion image models only. Oriented specifically toward sharing/downloading models made by hobbyists/enthusiasts. All of these models are finetuned from the same fundamental Stable Diffusion models - v1.4, v1.5, v2.0, and v2.1, primarily, I think, as those were the public releases.)
The second website, Civitai, is also a place for people to post their own images, sharing what they were able to make with each model. Be warned! There is a fair amount of NSFW (i.e. adult content) imagery, especially if you turn the filter for that off. And there are certainly models made specifically for porn, though that's not the only motivation.
There is a complicating factor there, I think, both for reasoning about all this and for understanding the lay of the land: if you train an SD model with no nude content, it becomes terrible at the human form. Who knew you gotta understand what's under the clothing to render it properly! (Answer: art teachers, lol.) And since it's hard to build a database that includes nude images without some porn ending up in it, and since some people want the porn anyhow, matters get complicated.
They say (though I am not educated in this area yet - Jordan Peterson mentioned it once; he may have cited a book or some researchers) that the early growth of internet connectivity and tech was significantly spurred by pornography. Seems there's an aspect of that here, too. The same influence is present in open-source GPT (text-gen) finetuning/training as well.
My feeling is that this factor is a convenient conflict of interest that "naturally" emerges to...
1. Muddy the waters and distract/divide some percentage of the efforts of people with otherwise somewhat altruistic or creative/curiosity-fueled aims working on Open Source AI.
2. Create plausible deniability for suppression of Open Source AI efforts so that AI can be centralized and restricted for "ivory tower" training, use, and distribution. Consider how making the internet more "safe" (some of which was sensible and beneficial, don't get me wrong) has also enabled a lot of biasing, skewing, and burying of information compared to the earlier days. "Safety" as an excuse for censorship and propaganda.