Actually an awesome and fast search engine (depending on which instance you use), with no trashy AI or ad results. Also great for privacy. If you don't know which instance to use, go to https://searx.space/ and choose one close to you.
Whoogle is a good option for self-hosting as well.
I stopped using it not because of the results but because you couldn’t swipe back without it sending you to the base website.
On DuckDuckGo (and Google and others), a search is shown in the URL. For example, looking for frog:
https://duckduckgo.com/?q=frog&t=fpas&ia=web
However, in SearXNG it just shows
https://searxng.world/search
Which I don't have an issue with. However, when you click on a link and then go back to the search results, it has no idea what you searched for, since it's not in the URL, and it shows an error.

That aside, the UI is great. Icons don't swap around on you like Google's, and there are no annoying popups about 'privacy' like DDG's. On the topic of search results, it was good enough for me. Not great, but then again there aren't any good search engines right now.
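(Side note: whether the query shows up in the URL depends on the instance's configured HTTP method. If I remember the settings.yml layout right, the admin can switch it to GET, roughly like this; double-check the key against your version's docs:)

```yaml
# settings.yml sketch -- key name from memory, verify against the SearXNG docs
use_default_settings: true

server:
  # "GET" puts ?q=... in the URL, so going back keeps your query;
  # "POST" keeps queries out of the URL and your browser history
  method: "GET"
```

Users can usually flip the same thing per browser on the instance's preferences page.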
Aren't all search queries available to whoever hosts an instance? In my eyes this is much worse for privacy and a much bigger risk, unless you really know who is behind your chosen instance. I would trust a company a bit more to safeguard this information so it doesn't leak to some random guy.
I've always gotten the impression it was mostly intended to be self-hosted. I've self-hosted it for a couple of years now, and it runs like a clock. It still strips out tracking and advertising, even if you don't get the crowd anonymity of a public instance.
Self-hosting doesn't make sense as a privacy feature, because then it's still just you making requests to Google and the other upstream search engines.
It's not useless: it removes a lot of the tracking cookies and the sponsored links loaded with telemetry. Theoretically you can also get the benefits of anonymity if you proxy through Tor or a VPN, which I originally tried to do, but it turns out Google blocks requests from Tor, and from the VPN endpoint I have, and probably from most of them. Google or whatever upstream SE can still track you by IP when you self-host, but its tracking is going to be much less without the extra telemetry cookies and tracking code it gets when you use Google results directly.
But yes, practically you either have to trust the instance you're using to some extent or give up some of the anonymity. I opted to self-host and would personally recommend the same over using a public instance if you can. And if privacy is your biggest concern, only use upstream search providers that are (or rather, claim to be) more privacy-respecting, like DDG or Qwant. My main use case is as a better frontend to search without junk sponsored results; privacy is more of a secondary benefit.
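(If anyone wants to try the Tor route mentioned above: SearXNG can proxy its outgoing engine requests. A minimal sketch, going from memory of the settings.yml schema, so verify against the docs before relying on it:)

```yaml
# settings.yml sketch -- route upstream engine traffic through a local
# Tor SOCKS proxy; schema from memory, check the SearXNG docs
outgoing:
  proxies:
    all://:
      - socks5h://127.0.0.1:9050
```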
FWIW, they have a pretty detailed discussion on why they recommend self-hosting here.
Companies are definitely selling your data. Use a VPN.
A VPN will not save you, they are easily worse for privacy in terms of user tracking. It centralises your entire web traffic in a single place for the VPN provider to track (and potentially sell).
You either trust the ISP or a VPN. It's a tool, not a blanket of protection. Opsec and knowing how to move are what matter most.
But you pay more for what is essentially the same thing with a VPN. You have to buy a VPN subscription on top of your internet subscription, you get less speed because your traffic is routed through a different country, and you get no benefit to privacy. The only use case for a VPN is when you have to bypass georestrictions.
As someone who hosts an instance, news to me lol
Edit: Developer says this can’t be done currently? Reddit comment
Of course it can be done, check your web server logs.
If you are using GET requests to send search queries to SearXNG, what you searched for will show up in the logs as
2024-10-31 203.0.113.100 /search?q=kinky+furry+pictures
If you use POST requests, the server admin can just as easily enable logging of the request bodies.
People hosting SearXNG can absolutely see what you searched for, along with your IP address, user agent string, etc.
Well, my instance's logs are sent to /dev/null for this reason already, but thank you for the info!
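For anyone wanting to do the same, assuming an nginx reverse proxy in front of the instance (the hostname and port here are placeholders), it's basically two lines:

```nginx
# nginx site config sketch: keep search URLs out of the logs
server {
    listen 80;
    server_name searx.example.org;        # hypothetical hostname

    access_log off;                       # or: access_log /dev/null;
    error_log /dev/null crit;             # failed requests can leak URLs too

    location / {
        proxy_pass http://127.0.0.1:8080; # the SearXNG container/app
    }
}
```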
Thanks for the clarification, and it's great that this is not included in the project, but couldn't someone change the server-side code and somehow see more of the info that goes through?
I know there is that HTML check on https://searx.space/ to see whether the search interface code has been heavily modified, but on the server side anything could be going on.
If requests were encrypted in a way that SearXNG couldn't see their contents, it probably wouldn't be trivial to do, but there is always the possibility that something clever could be done.
It doesn't bother me one bit if you know my search history. You'll learn that I search a word to see if I know how to spell it properly, and that I DIY a lot of stuff lol
Billions of Chinese also aren’t bothered that they live under surveillance, but it isn’t right.
Good thing it only happens to the Chinese…
It's like using Google from the 2000s; thanks for sharing
If you set it as your default search in Chrome or the like, it will convert the Google search bar on Android into a SearXNG search bar. I started using it a little while back. Firefox never did well for me on Android (I'm sure that's anecdotal).
Man, I wish I had the same experience
The couple of times I tried it out, the search results were barely accurate
Try Kagi. It's paid at $5/mo., but you get 300 searches to try it out.
Been rocking self-hosted Searxng for the last 3 weeks now as my default search engine; it’s as good or better than DDG and certainly better than Google. Results I need are usually within the first three items, no extraneous shit.
I thought I'd just try it out, but it's staying. The ability to tune the background engines is awesome. My search history is private (though I wasn't that worried about DDG, there was no way in fuck I was using Kagi) since it runs its searches via a VPN and returns the results to me locally.
Why wouldn’t you use Kagi?
Because I don’t want a direct link to payment information and my search history stored and sold later.
it’s as good or better than
It’s only as good as the search engines you select. Which ones have you selected?
Defaults are working fine, I might have added one or two.
What does it default to? Google+Bing+DDG?
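Not sure about the exact stock set, but for reference, engines get toggled per instance in settings.yml. A rough sketch (the engine names here are just examples; check what your version ships enabled):

```yaml
# settings.yml sketch: enable/disable upstream engines per instance
use_default_settings: true

engines:
  - name: google
    disabled: false
  - name: bing
    disabled: false
  - name: duckduckgo
    disabled: true    # example: drop DDG from the defaults
```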
Keep in mind that to protect your privacy you should also share your instance with others. All the searches are still linked to an IP which can be abused as well.
Yes, that’s the purpose of the VPN. It’s out there mixed in with everyone else that’s using that exit node.
Honestly, it’s not too much of a concern to me, I’m not doing anything illegal or naughty, it’s just making sure I’m not part of the dataset.
Yep, a VPN is a good solution.
How does self-hosting work? Is it querying other search engines, or maintaining a database on your server?
It’s a meta search engine: it aggregates results from multiple sources for your search query. So yes, it queries other search engines.
It's all calls to other engines, which you can choose and tune. So it's making those calls, filtering out shit like AI results, and then ranking what's left to return to you. Seems to do a good job.
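As a toy illustration of that loop (not SearXNG's actual code; the engine fetchers here are fake stand-ins), a metasearch is basically fan-out, merge, dedupe, rank:

```python
# Toy metasearch sketch -- illustrative only, not SearXNG's real code.
from concurrent.futures import ThreadPoolExecutor

def engine_a(query):
    # stand-in for a real upstream call; returns (url, title, score) tuples
    return [(f"https://example.com/a?q={query}", "Result from A", 1.0)]

def engine_b(query):
    return [(f"https://example.com/a?q={query}", "Result from A", 1.0),
            (f"https://example.com/b?q={query}", "Result from B", 0.8)]

ENGINES = [engine_a, engine_b]

def metasearch(query):
    # fan the query out to every enabled engine in parallel
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda engine: engine(query), ENGINES))

    # merge and dedupe by URL; URLs returned by several engines rank higher
    merged = {}
    for results in result_lists:
        for url, title, score in results:
            if url in merged:
                merged[url]["score"] += score   # boost cross-engine hits
            else:
                merged[url] = {"title": title, "score": score}

    # best-first ordering back to the user
    return sorted(merged.items(), key=lambda kv: kv[1]["score"], reverse=True)

if __name__ == "__main__":
    for url, info in metasearch("frog"):
        print(f'{info["score"]:.1f}  {info["title"]}  {url}')
```

Per the comment above, the real thing also filters out junk like AI results before ranking, but the shape is the same.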
I really wish there was a privacy oriented search engine with decent search results …
I’ve been pretty happy with Kagi and they have a decent privacy policy: https://kagi.com/privacy
At this point, I just wish there was a search engine with decent results …
It’s ok at best, when it works. When it runs out of API hits for the day at noon, you need to use something like https://searx.neocities.org/ and retype your search multiple times until you manage to hit an instance that can actually perform a search.
Also, no suggestions.
I use this daily and just wanted to highlight two downsides:
1. Some instances are quite slow to respond.
2. Some instances are non-English, so everything except the search results may be unreadable unless you know that language.
The second one has been happening less frequently recently though; not sure if there are just more English instances or some other reason behind it.
Self-host it; it's nothing to set up.
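(For the "nothing to set up" part: the project publishes a Docker image, and a minimal compose file along these lines, with the paths and port binding as placeholders, gets you a private instance:)

```yaml
# docker-compose.yml sketch, based on the official searxng/searxng image
services:
  searxng:
    image: searxng/searxng:latest
    ports:
      - "127.0.0.1:8080:8080"     # localhost only; front with a proxy or VPN
    volumes:
      - ./searxng:/etc/searxng    # settings.yml lives here
    environment:
      - SEARXNG_BASE_URL=http://localhost:8080/
    restart: unless-stopped
```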
Doesn’t that defeat the only benefit - anonymity?
It strips the tracking data to and from the engines, so if you tuck it behind a VPN, GG.
Not if it runs the queries it sends out via a VPN, where they mingle with thousands of other requests. An API call doesn't have the disadvantages of browser fingerprinting, cookies, etc. that are used to build a profile of a user browsing to your search engine and track their searches. Also, there is no feedback to the search engine about which result you choose to use. If you allowed outside users, it would further muddy the waters.
Ideally, you’d have it run random searches when not being used to further obfuscate the source.
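A dumb version of that decoy idea, pointed at your own instance (the URL and word list are placeholders, and the timing/vocabulary would need to be far less uniform to actually blend in):

```python
# Decoy-query sketch (illustrative): fire random searches at your own
# instance at random intervals to pad out the upstream traffic pattern.
import random
import time
import urllib.parse
import urllib.request

INSTANCE = "http://localhost:8080/search"   # hypothetical local instance
WORDS = ["weather", "recipe", "news", "python", "bicycle", "history"]

while True:
    query = " ".join(random.sample(WORDS, k=2))
    url = f"{INSTANCE}?q={urllib.parse.quote(query)}"
    try:
        urllib.request.urlopen(url, timeout=10).read()
    except OSError:
        pass  # instance down or rate-limited; skip this round
    time.sleep(random.uniform(60, 600))     # wait 1-10 minutes between decoys
```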
Did anyone else not know Lycos was still around? Lycos is still around. Seems not that bad. No AI stuff.
Can someone explain the meaning of the name and the people building this project please?
With a search tool, you're trusting how information is filtered and funneled to you, so it's not a change to take lightly. Google sucks, but they have a lot to lose, there are a lot of eyes on them, and I generally know their base motivations.
Fortunately, you can read through the source code of SearxNG and even modify it - provided that you also publish the modified version to your users if you host it publicly.
You can run your own instance, public or private. Or you can use a public instance.
Internally, it uses other search engines, rather than crawling the entire web and indexing everything.
I mean, it's often better than nothing, but it's a meta search that still often uses Google or Bing to gather results. IMHO, cut off the need for that data entirely and use an option like Mojeek.
For all their talk of doing things differently with their own index and rankings, Mojeek is following exactly what Google did. It's still an ad-based business model that makes users into products to be sold to advertisers. They're good now, while still trying to build market share, but once their investors get hungry, the enshittification will commence.
we make money mainly from our api, our investors are patient private capital and we don’t take vc, appreciate your point but these are fundamentally different situations, our ads (when they run) will also be contextual so more of a ddg situation than a “makes users into products to be sold to advertisers”
fair enough if it’s not for you though
I don't know that the comparison is inaccurate.
You make money from advertising to your users (“ddg situation” notwithstanding), are beholden to your investors (private status notwithstanding) and need to see more users to increase revenue. The person above you is saying that this model is what will drive you to eventually be as bad as Google. Do you understand?
We make money from our API, what they’re referencing is a beta ads programme which was running
API index access is an important difference.
If it were only that, without the public-facing ad-driven search, I'd be more impressed.

Maybe if you removed the ads and severely rate-limited your own public-facing search, so it's more of a demo than an actual service. This would force you to make money solely off the API access, without directly competing against those customers.
That would be an honest business model, one that doesn't turn users into eyeballs for advertising, which seems to me to be the most insidious problem of the modern internet.
Agree to disagree here, but I’ll refer to Cory Doctorow for a contextual vs behavioral/tracking ads comparison, one which is very good: https://pluralistic.net/2020/08/05/behavioral-v-contextual/#contextual-ads (applied to the media, but the general thread is relevant)
Do you have topics that are censored? I searched for my Reddit post "what I've learnt from the mantis aliens", and it does not show up in your results. It doesn't at Google either, but it does on other search engines. The UFO/alien stuff is censored on most search engines, when there's no reason for it to be. That is how I judge search engines, and Mojeek doesn't give me the results I asked for.
Reddit doesn’t allow us to crawl: https://www.reddit.com/robots.txt
Is that legally binding? What happens if they catch you? They ban your IPs, and then you're in the same situation as now. Literally no reason not to do it, IMO.
IP already hits a wall; also, better to not get a reputation as a bad bot. It's taken a while to get known for being friendly and respecting the rules. To us, you should follow robots.txt.
I seem to recall creative ways to index things without crawling, e.g. a browser addon that users opt into that sends pages back, essentially crowdsourcing the indexing. Anyway, good to see you're taking the high road!
Mojeek is cool, but trying to search for something in my first language returns 0 results.
which language are we talking?
Mojeek reminds me of early Google, with results that only match the page title and URL. I like it.
You can use Mojeek with SearXNG. Nvm, with nothing enabled but Mojeek it returned no results; I wonder why that is? I could be wrong, but didn't Mojeek also index results from Google and Bing? I'm wrong, they index their own results. I mean, Qwant is a search engine built in the EU, and they index their own results.
qwant is bing, mainly
The homepage took 5 seconds to load. I’ll pass.
not sure what's happening there for you; speed is one of the things people frequently say we do well
Let me tell you about waiting for AskJeeves to load up in the 90s.
Web _crawler_ indeed
Not at all for me
I’d use it if its public instances didn’t get rate-limited so often
The best thing is to self-host it, for better uptime and custom default settings.
4get.ca has been great for me.
Dig out an old PC from somewhere, install some Linux distribution plus Tailscale and Docker/Podman, and install SearXNG that way.
What if you want to use the computer for something other than Google searching?
Install the other stuff on it