Raise your voices!
-
Another idea! Making a gluon shield that would be effective against particle and neutron weapons.
We could extend capship weapons to all Freelancer weapon types and create new classes from the available 3D models. For instance, the Bretonian Battleship could have several variants: one with more particle weapons and another with more tachyon weapons. This would improve ship variety in gameplay and force the player to scan a ship before fighting it.
Here is a cool lightning rod:
-
Freestalker.fr, you’re right: variability in playstyle and team tactics is the primary objective of the weapon and shield systems (and not only those, of course).
If I start explaining these points right now, I will have to lay out everything I have written in the concepts, because every part relates to another. But rest assured they are very close to the things you said. The naming may differ a lot, but it can be replaced, of course. I’m thinking about making a Google Docs table so that anyone who knows better can replace things with their own, especially the nationality-colored factions/houses.

About battleship weapons, and weapons in general: every ship class shall receive unique new weapon types (besides the common ones, of course), though some of them require a little coding.

Scanning will be an absolute must-have, because the vanilla table of damage blocking is greatly expanded. Also, as you say, factions receive 2 or 3 of the 5 primary weapon types, and you cannot know which one will be used against you; the same goes for shields.

I’ve made an extremely cool shield system where capacity does not equal efficiency at all. It is built around two values, sqrt(cap*reg) and cap/reg, where the first one is the real power and the second defines the shield type and per-weapon-type absorption.
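A minimal sketch of that shield math (the `cap` and `reg` names follow the post; all the numbers are made up for illustration):

```python
import math

def shield_stats(cap: float, reg: float):
    """Derive the two design values described above:
    power = sqrt(cap * reg) -- the shield's real strength,
    ratio = cap / reg       -- defines the shield type / absorb profile.
    """
    return math.sqrt(cap * reg), cap / reg

# Two shields with very different capacities can have identical power:
p1, r1 = shield_stats(cap=16000.0, reg=250.0)   # big buffer, slow regen
p2, r2 = shield_stats(cap=4000.0, reg=1000.0)   # small buffer, fast regen
print(p1, p2)   # both 2000.0 -- equal real power
print(r1, r2)   # 64.0 vs 4.0 -- very different shield types
```

The point, as described above, is that capacity alone tells an attacker nothing: two shields of equal power can have very different cap/reg ratios, which is what scanning would reveal.
-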
Some update about models.
space_dome http://i.imgur.com/BIJmjAK.jpg
space_factory01 http://i.imgur.com/hMjsd9i.png
space_freeport http://i.imgur.com/09Yf1iF.jpg
space_industrial http://i.imgur.com/YVwfD3u.png
space_industrial01 http://i.imgur.com/fORtjct.jpg
space_industrial02 http://i.imgur.com/AyAgcWd.jpg
space_police http://i.imgur.com/83rdadd.png
space_tankl http://i.imgur.com/Q3nI3MV.png
Here is a table of current state:
https://docs.google.com/spreadsheets/d/1Uh5qI_fBV9o0Ji_eBGFQ7ubuWBb3ScXZaVuPS9sG7Mc/edit?usp=sharing
I am stuck at unwrapping and texturing, because I have no experience with it. If you are a professional in this area and can help, let me know.
-
Oh, those are really nice models.
-
Nice models, although these sorts of details typically go into normal maps (plus bump and ambient occlusion maps) rather than actual poly geometry. The problem with the existing vanilla FL model assets is that they were made to reuse the same set of textures without much consideration for edges and seams, so simply adding normal mapping over the existing geometry wouldn’t always give a correct result, especially where a model relies heavily on UV mirroring to tile textures and cover seams (and a lot of FL stations and misc assets do that). I guess there’s a case for “remodeling”… However, I don’t think these models, even when finished, would do well as drop-in replacements for the vanilla meshes. There are mesh format limitations (16-bit vertex indices; you can break a model into multiple vmeshes internally, but that comes with its own set of issues), the old renderer is inefficient at handling this geometric complexity, and the game does struggle with higher-fidelity model assets.
-
Treewyrm, they actually do not reach the poly limit; they are about 10-15k.
Also, I think that restructuring will allow the parts to be organized better, avoid reusing the same ones several times, and bring some logic and possible proper use in a dynamic economy system later. The only issue I see is docking, but that is fixable in the future. About normal maps: they look great at long and mid range, but up close real geometry is a lot better, and it allows sub-details to be added by using normals on top of it.
Skotty., ty!
-
Due to the per-property vertex buffer format, those 10k+ models will run into the 16-bit vertex index limit right away. Besides, the game simply doesn’t handle that kind of complexity well; the engine has no smart buffer caching like the static mesh instancing in Unreal Engine and many others. I’m not saying this to dissuade you, but I ran into numerous rendering issues in FL when going with much fancier models.
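To illustrate the point (a rough sketch; the exact vertex layout FL’s exporters produce may differ): the renderer needs one buffer entry per unique position/normal/UV combination, so a 10-15k-triangle mesh with hard edges and UV seams can hold far more render vertices than modeling vertices:

```python
def render_vertex_count(triangles):
    """Count unique (position, normal, uv) combinations. A vertex shared
    by faces with different normals or UVs must be duplicated in the
    vertex buffer, inflating the index range.
    Each triangle is a 3-tuple of (position, normal, uv) corners."""
    return len({corner for tri in triangles for corner in tri})

def fits_16bit_indices(triangles, limit=65535):
    """True if the mesh can be indexed with 16-bit indices; otherwise it
    has to be split into multiple vmeshes."""
    return render_vertex_count(triangles) <= limit

# A shared corner with two hard-edged faces becomes two buffer entries:
p = (0.0, 0.0, 0.0)
tris = [
    ((p, (1, 0, 0), (0, 0)), ((1, 1, 0), (1, 0, 0), (1, 0)), ((1, 0, 1), (1, 0, 0), (1, 1))),
    ((p, (0, 1, 0), (0, 0)), ((1, 1, 0), (0, 1, 0), (1, 0)), ((0, 1, 1), (0, 1, 0), (1, 1))),
]
print(render_vertex_count(tris))   # 6: position p is duplicated per normal
```

So two triangles with four distinct positions already cost six buffer entries; at scale, that duplication is how a mid-poly model blows past 65,535.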
There’s a good reason why normal mapping is such a prevalent technique in real-time rendering and continues to be so. Back in the day, graphics card manufacturers would boast about how many triangles they could render per second, but that sort of “MHz wars”-style comparison has been gone for a long time; instead the focus has shifted to smarter techniques and managing the performance of complex shader code.
Sure, models could be broken down into multiple LODs, but I don’t recall any games or engines that would swap in an even more detailed pre-baked mesh at close-up distances.
-
I did mention splitting into multiple vmesh libraries to overcome the vertex index size, so I wasn’t saying it’s impossible. I can dig up some of my own models, and technically they all work; quirks with mixing material transparency types show up more prominently, and so on. A lone ship sure ain’t gonna harm anything, but drop in bases that are made of several parts, then consider that you may see several bases on screen within view distance plus a dozen or so ships of similar fidelity, and I’m sure you get the idea of where it’s going.
-
We’re spawning dozens of ships for NPC events; we’ve tested with large fleets of ships with polycounts in the 50k+ range, and it runs fine with all the rendering stuff Freeworlds adds (which is way heavier than even Schmack’s thing).
I think you’re severely overestimating the cost of pushing some polygons and draw calls.
-
Quite obviously I can’t comment on what modifications are in Freeworlds, how they handle performance, or on what systems. What I meant was running things on vanilla and testing lots of high-poly objects in view range, especially on not-so-modern lower-to-mid-tier graphics cards as well as integrated GPUs, where things ain’t so peachy and raw poly count really makes them struggle.
Anyway, all I wanted to say is that .Nx’s models are nice, but as mentioned before, most of these details are, in my opinion, better implemented as normal maps, and the indented parts perhaps as parallax mapping too. He has the high-res models already, so baking the details into maps shouldn’t be difficult. What remains, of course, is an engine to put all that good stuff into.
-
Treewyrm, I can also confirm that vanilla handles masses of polys well. Here is another old vid of mine where you can see ~20k tradelanes and ~50k bases with no draw distance limit: https://youtu.be/v7rKWzhJrF0?t=1m18s
The FPS is low because of ENB running on an extremely old Radeon 9800 card; with it turned off, the framerate was normal.
Anyway, when I model I try to optimize and keep the topology clean.
-
As I said, Freeworlds has no optimizations. It’s drawing them exactly like FL does, except it draws them anywhere between 2 and 6 times more often than vanilla (because of a depth prepass, the G-buffer, and 2+ shadow passes). In spite of this and all the extra graphics processing we require, we can draw multiple hundreds of thousands of polygons per frame without breaking a sweat.
I’d rather not see yet another project get dragged down because a few people have shitboxes they hang onto for dear life. You can’t realistically game on an Intel GMA from 2005, why should a 2017 project try to make it work?
Plus, that’s why LODs exist. Turning down the level-of-detail option in the game will eventually stop LOD0 from ever being drawn, and as long as the models have good LODs, everything’d be peachy.
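Distance-based LOD selection of the kind described here can be sketched like this (a sketch only; the `detail_scale` factor standing in for the detail slider is my assumption, not the game’s actual code):

```python
def select_lod(distance, lod_ranges, detail_scale=1.0):
    """Pick a LOD index from camera distance.
    lod_ranges: ascending switch distances, one per LOD level.
    Lowering detail_scale shrinks every range, so coarser LODs kick in
    sooner; push it low enough and LOD0 is only ever drawn point-blank."""
    for lod, max_dist in enumerate(lod_ranges):
        if distance < max_dist * detail_scale:
            return lod
    return len(lod_ranges)   # beyond the last range: coarsest / culled

ranges = [300.0, 1000.0, 4000.0]
print(select_lod(500.0, ranges))                    # 1 at full detail
print(select_lod(500.0, ranges, detail_scale=0.3))  # 2 with detail turned down
```

The takeaway is that the expensive LOD0 geometry only ever costs anything for players whose settings (and distance) actually select it.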
-
FriendlyFire wrote:
I’d rather not see yet another project get dragged down because a few people have shitboxes they hang onto for dear life. You can’t realistically game on an Intel GMA from 2005, why should a 2017 project try to make it work?

To be honest, I was quite surprised how many in the FL community keep using pretty outdated hardware, oftentimes lagging four or five “generations” behind, but mostly it seems those “shitboxes” are actually laptops with only GMA/Intel HD graphics to handle any 3D. It’s kind of ironic how some people, often the same ones, also happen to have smartphones that would be more capable of delivering good performance even in OpenGL ES, while these laptops simply crawl.
But I guess I just like to refine meshes (and see them refined) even now, at a time when it’s no longer as necessary as it used to be. What can I say… old optimization habits from the UE 2.x/3.x era die hard, heh.
-
Generally I agree with Treewyrm.
I have an object with a high poly count in my mod which is used in one system multiple times (if I remember correctly, 100 times over a relatively large area). The 3D model itself was optimized before I put it into the mod and does not even exceed the limits of the good old Milkshape exporters. Still, players have reported performance drops.
In another system I have a large station complex built mostly with vanilla models, and it causes significantly lower FPS.
I observed lower performance years ago, when I still had somewhat lower hardware specs, in combination with high-detail models.
Certainly it depends on how much more detailed a 3D model is and how many models the game needs to render. Of course the hardware is also a factor here.
But let’s also keep in mind that very often Freelancer players stick with the game because it still works on their outdated hardware, while games like Star Citizen or Elite most likely wouldn’t even run.
And while LODs are mentioned here… this requires the creation of such LODs first, which is a lot of extra work, assuming that you can’t simply take the old models as LOD1, LOD2 etc., since it will be very tricky to make the textures fit (the transitions between the LODs would look strange otherwise).
I am not saying that it’s impossible, but I am saying that it’s a bit trickier than it appears at first view.
It’s not just about replacing a few 3D models. It’s about creating new fitting textures (since most likely the old ones won’t work), taking care of LODs, taking care of animations, and of SURs (which have to fit the new models, also in relation to animated parts). Of course it would be possible to use the old SURs, but then you have to be very precise with the creation of the new visible models (and their LODs).
Maybe all of that has been covered already (dunno)… I just wanted to point out what needs to be considered in the creation process. And like Treewyrm said, it should also be considered that more detailed objects could lead to performance issues (depending on many different factors).
My observations over the years simply line up with what Treewyrm wrote. I am not able to tell what’s possible in Freeworlds or under which conditions your observations happened.
-
I don’t understand the problem with LODs. A basic solution:
- On the new model, use the sizes of the default models (just with enhanced graphics).
- Then you don’t need to create a new SUR file or wireframes.
- Use the default models (from vanilla).
- Move all LODs in the model down to the next level (LOD0 becomes LOD1, LOD1 becomes LOD2, etc.).
- Add the new geometry to the model and use it as LOD0.
And that’s all! If your PC can’t handle the new models, you just need to change the graphics settings.
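The steps above amount to a one-line transformation of the LOD chain (a sketch; the actual vmesh surgery is of course more involved):

```python
def insert_highpoly_lod0(new_lod0, vanilla_lods, max_levels=None):
    """Shift every vanilla LOD down one level (LOD0 -> LOD1, LOD1 -> LOD2, ...)
    and prepend the new high-detail mesh as LOD0. The SUR and hardpoints
    stay untouched because the model keeps its vanilla dimensions."""
    chain = [new_lod0] + list(vanilla_lods)
    if max_levels is not None:
        chain = chain[:max_levels]   # drop the coarsest level(s) if over budget
    return chain

# Vanilla chain for some station piece:
print(insert_highpoly_lod0("hd_mesh", ["lod0", "lod1", "lod2"]))
# -> ['hd_mesh', 'lod0', 'lod1', 'lod2']
```

With the detail setting turned down, the old LOD0 then behaves exactly as it does in vanilla, and only players with the hardware for it ever see the new mesh.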
-
You can very quickly generate LODs using polygon decimation techniques. Since high-poly geometry tends to add a lot of fine detail, that detail is easily removed to drop the polygon count. Modern software (i.e. not Milkshape) will do this automatically and preserve the UVs, so the textures don’t have to be modified.
Since the models are only seen from a certain distance away, the small imperfections are largely unnoticeable, and where they aren’t, minor manual corrections tend to be sufficient.
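As a toy illustration of why decimation can leave the textures alone (real tools use quadric error metrics and are far smarter; this only merges near-coincident vertices): because only the triangle indices are remapped, the per-vertex UV array stays valid as-is.

```python
import math

def collapse_short_edges(positions, triangles, min_len):
    """Naive decimation: merge vertex pairs closer than min_len and drop
    the triangles that degenerate. Surviving indices keep pointing at the
    original positions/UV arrays, so no texture work is needed."""
    parent = list(range(len(positions)))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b, c in triangles:
        for i, j in ((a, b), (b, c), (c, a)):
            ri, rj = find(i), find(j)
            if ri != rj and math.dist(positions[ri], positions[rj]) < min_len:
                parent[rj] = ri        # j collapses into i

    kept = []
    for a, b, c in triangles:
        a, b, c = find(a), find(b), find(c)
        if len({a, b, c}) == 3:        # skip degenerate triangles
            kept.append((a, b, c))
    return kept

pos = [(0, 0, 0), (1, 0, 0), (1.01, 0, 0), (0, 1, 0)]   # vertices 1 and 2 nearly coincide
tris = [(0, 1, 3), (1, 2, 3)]
print(collapse_short_edges(pos, tris, min_len=0.1))      # [(0, 1, 3)]
```

The sliver triangle between the two near-coincident vertices vanishes, and every remaining index still points at its original UV entry.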
-
Well, I’ve seen multiple solutions to reduce polygon counts in 3ds Max, but none that I would call good at preserving UVs.
I might have tried the wrong methods there… dunno. And while I think using the old LODs can technically be done, I see it as tricky, because the new models need new textures and new UVs. If these don’t exactly fit the original texture layout and positioning, you will get very bad visual results exactly when the LODs change.
I’m not saying that it is impossible, but with that method the UV mapping of the new models is going to be a very tricky job (at least if you intend to have smooth LOD transitions).
-
Here is what I have to say about this.
First of all, keeping in mind everything that needs to be done to revive and evolve FL, this problem is almost nothing and can be fixed later at a lower priority.
Second, I do not count it as a problem at all, as I am sure FL can run perfectly well with these models. Yes, I will make mipmaps for the textures, and I suppose that is enough. Making LODs for ~15k average models in 2017 is kind of strange, but of course we can just delete some small details without touching the textures, as FF says, which can significantly lower the polycount.
Lastly, if you get lag even with the vanilla models (which are made in a pretty shoddy way; you can notice it when importing them into 3ds Max and doing some work with the polys), maybe it is time to get at least an HD 68xx with an FX, which are extremely cheap now, especially on secondary markets. I use them and feel fine while waiting for Threadripper for a big upgrade. I mean, if developers always relied on this, we wouldn’t see any new games at all. I get the importance of optimization, but standing in one place for years is not it.
Maybe you are right, as I do not know all of the issues. But I have to try, and I think we should solve the problems as they come.
-
I just tried to point out a few things to consider.
Don’t worry too much about it… for every problem there is a solution.