
Strange problem on Linux after adding the -pingboost 3 parameter and sys_ticrate 3000

Posted on 2015-4-9 00:09:54
On a Linux server, launching with the -pingboost 3 parameter and setting sys_ticrate 3000 leads to a strange problem.

With these settings the FPS does hold steady at 1000, but in game, weapon switching, firing and the round timer are all accelerated; everything runs very, very fast.

Does anyone know what is going on? Is there a way on Linux to hold a stable 1000 FPS without triggering this acceleration bug?

OP | Posted on 2015-4-21 16:30:01
I found the cause of the problem myself. First, let me repost a good article for everyone to read.
“The 1000 FPS Fairy Tale
Friday, 20 April 2007
For a couple of months now, a lot of game server providers have been offering special high-performance servers that run at 1000 FPS. These 1000 FPS supposedly make the server run a lot better and more precisely than standard settings. Players hit better, the calculation of player positions is a lot more precise, and if you are a “competitive player” in a “professional gaming league”, you of course need one of these 1000 FPS servers.

If you take a closer look at the details concerning these 1000 FPS servers, you’ll find that what most providers tell you is very unclear and confusing. Sometimes they try to give the impression that their servers are “FPS certified” in some way, without telling you what this certification really is. We can assume that this certification was invented by the game server providers themselves or that it only expresses some subjective opinion. Do game servers that run at 1000 FPS really offer smoother gameplay with better hit registration? No, because they simply can’t! Here’s why …

In theory

All events on a CS:S or CS1.6 server are calculated per frame. All positions, directions and speeds at one point in time are summarized in one frame. A frame is similar to a snapshot or still image of a movie. The more frames a server calculates per second, the more precise its data is. At 1000 frames per second (FPS) the server calculates its “world” at a rate of one frame per millisecond. At 100 FPS it calculates the same “world” at a rate of one frame per 10 milliseconds. So far the argument for higher precision still holds … but only as long as you look at the server in isolation. In practice, all of this remains useless to the client.
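To put these numbers into code form, here is a trivial stand-alone C snippet (an illustration only, not engine code) that computes the frame interval for the rates discussed:

    #include <stdio.h>

    int main(void)
    {
        const double fps[] = { 100.0, 333.0, 1000.0 };
        for (int i = 0; i < 3; i++) {
            /* the time budget per frame is the reciprocal of the FPS */
            printf("%6.0f FPS -> one frame every %5.2f ms\n",
                   fps[i], 1000.0 / fps[i]);
        }
        return 0;
    }

At 1000 FPS this prints a 1.00 ms frame interval; at 100 FPS, a 10.00 ms one – exactly the two rates compared above.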

In practice

Here the players (clients) have to be taken into consideration, too. The server may calculate its “world” at 1000 FPS, that is, one calculated frame per millisecond. On the other end, the clients do not get their updates at such a high rate but a lot more slowly. How many updates they can receive per second is determined by the server’s tickrate, usually set to 66 or 100 for high-quality servers. The server freezes its “world” once per tick and then decides which clients it sends its data to. But the server doesn’t send all of its information; it only sends the changes since the last update. The tickrate defines how often the server takes snapshots of its frames that can then be sent to the clients. This way a client gets only 100 updates per second from a tickrate 100 server. In the other direction, the clients also send the server commands, and here the tickrate likewise determines how many commands per second the server accepts from a client. For a tickrate 100 server, that again means only one command per 10 milliseconds.
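The relationship between frames and ticks can be sketched in a few lines of C (again just an illustration of the mechanism described above, not actual engine code):

    #include <stdio.h>

    #define SERVER_FPS 1000   /* frames the server simulates per second */
    #define TICKRATE    100   /* snapshots it may send per second       */

    int main(void)
    {
        int snapshots = 0;
        /* walk through one second of server time, frame by frame */
        for (int frame = 1; frame <= SERVER_FPS; frame++) {
            /* every frame the "world" advances by 1 ms, but only every
             * (SERVER_FPS / TICKRATE)-th frame falls on a tick boundary,
             * and only then is a delta update sent to the clients */
            if (frame % (SERVER_FPS / TICKRATE) == 0)
                snapshots++;
        }
        printf("%d frames simulated, %d snapshots sent\n",
               SERVER_FPS, snapshots);
        return 0;
    }

Of the 1000 frames calculated per second, only 100 ever leave the server as updates – and in the other direction only 100 client commands per second come in, so 9 out of 10 frames are calculated without any fresh input.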

This is the point where the house of cards finally collapses. What is the server supposed to calculate its “world” with every millisecond if it only gets one command from each client every 10 milliseconds? All the server can do is calculate its “world” from old data 90% of the time. If you also take into consideration that various latencies influence the rates at which the clients send their commands to the server, and that the server buffers these commands in queues, then 500, 600 or even 1000 FPS make no sense at all. The server usually has to work with data that is about 50 milliseconds old or even older. During this time a lot may already have changed, and further inputs (mouse movement) may have been given on the client side. So the server has to predict events – it has to guess what the clients will do next. It may calculate a movement completely different from what the player really does … with a precision of one millisecond at 1000 FPS. It does not matter whether the server runs at 333 FPS or 1000 FPS: a wrong guess stays a wrong guess. At 1000 FPS it is just “more precisely wrong”.

Now you might ask: “But what if the server guesses right?” True, if the server predicts correctly, the position of a player is indeed more precise – on the server! But not on the client. The engines of both CS:S and CS1.6 act on the assumption that the server’s and the client’s time are in sync. The server’s time is used for all clients. So the server saves a so-called frame time for every calculated frame – every millisecond on good Linux servers. The client uses the frame time that it receives with every update as its own time. From a 1000 FPS server with a tickrate of 100, the client should receive updates in which the frame time has advanced by 10 milliseconds. Even if the server has predicted the player’s actions correctly, 1 or 2 milliseconds of latency in a packet’s run time from the server to the client are enough to make all data imprecise again. So again, it does not matter whether the server runs at 333 or 1000 FPS. Not to mention, latencies are usually a lot higher.
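The effect of packet run time on the frame times a client sees can also be shown with a short C sketch (the latency values below are made up purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        const double tick_ms = 10.0;  /* ideal frame-time advance per update */
        /* hypothetical per-packet run times from server to client, in ms */
        const double latency_ms[] = { 5.0, 7.0, 5.0, 6.5 };
        double prev = 0.0;
        for (int i = 0; i < 4; i++) {
            double arrival = i * tick_ms + latency_ms[i]; /* send time + travel */
            if (i > 0)
                printf("update %d: delta seen by client %.1f ms (ideal %.1f ms)\n",
                       i, arrival - prev, tick_ms);
            prev = arrival;
        }
        return 0;
    }

Although the server stamps its updates exactly 10 ms apart, the client here observes deltas of 12.0, 8.0 and 11.5 ms – a jitter of a few milliseconds is enough to wipe out any sub-millisecond precision the 1000 FPS server may have had.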

Some might say: “But the server includes latencies in its calculations!” True again, the server does take the clients’ latencies into account. To measure a latency correctly, the server needs command packets from the client; the value is determined as an average across a series of command packets and then included in the server’s calculations. But even this averaging makes everything imprecise again. More importantly, the server uses old data to calculate that average latency. Again, no difference between a 333 and a 1000 FPS server.

Some players think that with settings of cl_cmdrate 100, cl_updaterate 100 and rate 30000 you get exactly that from the server. But that is also wrong. In both CS:S and CS1.6 the server is authoritative. That means that under all circumstances it is the server that decides how many updates it sends and how many commands it accepts per second. The client may claim to receive 100 updates per second, but the server sometimes delivers just 90.
Here 1000 FPS are even counterproductive. If the server is under heavy load and can no longer calculate its 1000 FPS, it decides to send the clients fewer updates and to process fewer commands, because the engine prioritizes reaching its FPS target over processing updates and commands. This is important because the Source engine, but also the old Half-Life engine, do all their calculations per frame: no frame, no calculation, no update. The engine will always try to reach its pre-set FPS under all circumstances and will simply drop updates and commands from and to the clients if necessary. This is why a server that runs at a constant 333 FPS is a lot more precise than a server that has to switch between 500 and 1000 FPS all the time – which is exactly what most tuned 1000 FPS servers currently do.

Conclusion:

It is not important whether a server runs at 333, 500, 600 or even 1000 FPS; any of these frame rates makes a server fast enough. It is far more important that the server has a high-quality internet connection and always reaches its pre-set FPS. Only Valve Software can improve hit registration, by making the algorithms responsible for prediction, extrapolation and interpolation more precise.

Don’t let yourself be fooled by fake “1000 FPS certificates”. Don’t let anyone force you to play on a 1000 FPS server in wars or matches. It is all just a marketing gag – nothing more or less.”

EDIT: Valve has also stated this many times.

Example:
“A Valve programmer (Mike Dussault) is quoted as stating that a Source server will sleep every frame above the tickrate.”
forums.srcds.com/viewpost/58322#pid58322

Sorry, that was probably not the point of the thread, but it’s always annoying to see that people still fall for the server FPS scams.


Posted on 2015-4-23 21:38:27
On Windows Server 2003 my server’s FPS was always between 300 and 512, and on 2008 between 500 and 800. What is the use of pointlessly pushing the FPS higher?

Posted on 2015-5-25 05:30:58
So what was the cause? Also, that wretched article doesn’t understand the difference between 1000 FPS and 500 FPS at all; it’s just talking nonsense.

OP | Posted on 2015-5-27 16:24:19
jaycs1723658 wrote on 2015-5-25 05:30:
So what was the cause? Also, that wretched article doesn’t understand the difference between 1000 FPS and 500 FPS at all; it’s just talking nonsense.

Sorry, I have been busy recently; I’ll write up the cause right away.

OP | Posted on 2015-5-27 16:29:58
In short: once the HLDS FPS exceeds 1000, the sleep interval (not sure if that is the right term; in the code it is simply called the sleep time) drops below 1 millisecond.
That is the precondition. But Valve’s HL source code (specifically, the Host_FilterTime function) contains the following lines, which I suspect are the root cause of the acceleration:

    if ( host_frametime < 0.001 )
    {
        host_frametime = 0.001;   // clamp: never simulate less than 1 ms per frame
        return 1;                 // run the frame with the inflated frame time
    }

The line in red (the clamp host_frametime = 0.001;) should be removed to fix the acceleration that occurs once a server legitimately exceeds 1000 FPS.
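Here is a minimal sketch of why that clamp accelerates the game (stand-alone C with assumed numbers, not the actual engine loop: say the server really runs at 2000 FPS, i.e. only 0.5 ms of real time passes per frame):

    #include <stdio.h>

    int main(void)
    {
        const double real_dt = 0.0005;      /* 0.5 ms of wall-clock time per frame */
        double real_time = 0.0, game_time = 0.0;

        for (int frame = 0; frame < 2000; frame++) {  /* one real second */
            double host_frametime = real_dt;
            if (host_frametime < 0.001)
                host_frametime = 0.001;     /* the clamp from Host_FilterTime */
            real_time += real_dt;
            game_time += host_frametime;    /* the "world" advances by this much */
        }
        printf("real: %.2f s  game: %.2f s\n", real_time, game_time);
        return 0;
    }

After one real second the game clock reads two seconds: every frame simulates at least 1 ms of game time even though only 0.5 ms of real time has passed, so weapon switching, firing and the round timer all run at double speed – exactly the symptom described at the top of the thread.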

I am still digging deeper into the CS source code for the details.

Posted on 2015-5-27 17:22:02
aKang wrote on 2015-5-27 16:29:
In short: once the HLDS FPS exceeds 1000, the sleep interval (not sure if that is the right term; in the code it is simply called the sleep time) drops ...

Well, I can’t follow the code, but on a server I set up on CentOS, with sys_ticrate 10000 the server FPS reaches 9999, and no acceleration problem appears.

OP | Posted on 2015-5-28 10:21:43
jaycs1723658 wrote on 2015-5-27 17:22:
Well, I can’t follow the code, but on a server I set up on CentOS, with sys_ticrate 10000 the server FPS reaches 9999, and no ...

Is that with just the -pingboost 3 parameter, or do you have other booster plugins installed as well?

Posted on 2015-5-28 15:24:46
Only -pingboost 3 and sys_ticrate.
CentOS 7

